00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 975
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3642
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.041 The recommended git tool is: git
00:00:00.042 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.059 Fetching changes from the remote Git repository
00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.089 Using shallow fetch with depth 1
00:00:00.089 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.089 > git --version # timeout=10
00:00:00.121 > git --version # 'git version 2.39.2'
00:00:00.122 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.153 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.153 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.450 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.461 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.472 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.472 > git config core.sparsecheckout # timeout=10
00:00:03.482 > git read-tree -mu HEAD # timeout=10
00:00:03.496 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.513 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.513 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.641 [Pipeline] Start of Pipeline
00:00:03.654 [Pipeline] library
00:00:03.655 Loading library shm_lib@master
00:00:03.655 Library shm_lib@master is cached. Copying from home.
00:00:03.669 [Pipeline] node
00:00:03.689 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:03.691 [Pipeline] {
00:00:03.700 [Pipeline] catchError
00:00:03.702 [Pipeline] {
00:00:03.711 [Pipeline] wrap
00:00:03.717 [Pipeline] {
00:00:03.722 [Pipeline] stage
00:00:03.723 [Pipeline] { (Prologue)
00:00:03.740 [Pipeline] echo
00:00:03.742 Node: VM-host-SM16
00:00:03.748 [Pipeline] cleanWs
00:00:03.760 [WS-CLEANUP] Deleting project workspace...
00:00:03.760 [WS-CLEANUP] Deferred wipeout is used...
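The prologue above pins the job-bootstrap repo to a single revision with a depth-1 fetch rather than a full clone. A minimal standalone sketch of that sequence (the URL and SHA are taken from the log; the proxy and credential handling that Jenkins injects are omitted here):

  #!/bin/bash
  # Shallow-fetch one ref, then check out the pinned revision, as the prologue does.
  set -e
  repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  rev=db4637e8b949f278f369ec13f70585206ccd9507

  git init jbp && cd jbp
  git config remote.origin.url "$repo"
  # --depth=1 keeps the clone small; --tags --force mirrors the Jenkins invocation.
  git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
  git checkout -f "$rev"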
00:00:03.766 [WS-CLEANUP] done
00:00:03.950 [Pipeline] setCustomBuildProperty
00:00:04.024 [Pipeline] httpRequest
00:00:04.590 [Pipeline] echo
00:00:04.592 Sorcerer 10.211.164.20 is alive
00:00:04.601 [Pipeline] retry
00:00:04.603 [Pipeline] {
00:00:04.612 [Pipeline] httpRequest
00:00:04.616 HttpMethod: GET
00:00:04.617 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.617 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.623 Response Code: HTTP/1.1 200 OK
00:00:04.623 Success: Status code 200 is in the accepted range: 200,404
00:00:04.623 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.259 [Pipeline] }
00:00:06.274 [Pipeline] // retry
00:00:06.281 [Pipeline] sh
00:00:06.559 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.574 [Pipeline] httpRequest
00:00:08.161 [Pipeline] echo
00:00:08.163 Sorcerer 10.211.164.20 is alive
00:00:08.173 [Pipeline] retry
00:00:08.175 [Pipeline] {
00:00:08.189 [Pipeline] httpRequest
00:00:08.194 HttpMethod: GET
00:00:08.195 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:08.195 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:08.211 Response Code: HTTP/1.1 200 OK
00:00:08.211 Success: Status code 200 is in the accepted range: 200,404
00:00:08.212 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:52.301 [Pipeline] }
00:00:52.319 [Pipeline] // retry
00:00:52.326 [Pipeline] sh
00:00:52.607 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:55.159 [Pipeline] sh
00:00:55.441 + git -C spdk log --oneline -n5
00:00:55.441 c13c99a5e test: Various fixes for Fedora40
00:00:55.441 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:55.441 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:55.441 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:55.441 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:55.463 [Pipeline] withCredentials
00:00:55.475 > git --version # timeout=10
00:00:55.489 > git --version # 'git version 2.39.2'
00:00:55.505 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:55.507 [Pipeline] {
00:00:55.517 [Pipeline] retry
00:00:55.519 [Pipeline] {
00:00:55.534 [Pipeline] sh
00:00:55.816 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:56.087 [Pipeline] }
00:00:56.107 [Pipeline] // retry
00:00:56.113 [Pipeline] }
00:00:56.129 [Pipeline] // withCredentials
00:00:56.139 [Pipeline] httpRequest
00:00:56.720 [Pipeline] echo
00:00:56.722 Sorcerer 10.211.164.20 is alive
00:00:56.732 [Pipeline] retry
00:00:56.734 [Pipeline] {
00:00:56.749 [Pipeline] httpRequest
00:00:56.753 HttpMethod: GET
00:00:56.754 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:56.754 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:56.756 Response Code: HTTP/1.1 200 OK
00:00:56.757 Success: Status code 200 is in the accepted range: 200,404
00:00:56.757 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:02.655 [Pipeline] }
00:01:02.672 [Pipeline] // retry
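Each source tarball above is pulled from the internal package cache ("Sorcerer") inside a retry block, then unpacked with tar. A rough shell equivalent of that fetch-and-unpack step (the URL is the one from the log; the retry loop and curl flags are illustrative, not the pipeline's actual httpRequest implementation):

  #!/bin/bash
  # Fetch a pinned tarball with a few retries, then unpack it.
  url=http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
  out=${url##*/}

  for attempt in 1 2 3; do
      # -f makes curl fail on HTTP errors so the loop can retry them.
      curl -sSf -o "$out" "$url" && break
      echo "fetch attempt $attempt failed, retrying" >&2
      sleep 5
  done
  # --no-same-owner matches the log: extracted files are owned by the invoking user.
  tar --no-same-owner -xf "$out"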
00:01:02.680 [Pipeline] sh
00:01:02.963 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:04.353 [Pipeline] sh
00:01:04.635 + git -C dpdk log --oneline -n5
00:01:04.635 caf0f5d395 version: 22.11.4
00:01:04.635 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:04.635 dc9c799c7d vhost: fix missing spinlock unlock
00:01:04.635 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:04.635 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:04.653 [Pipeline] writeFile
00:01:04.669 [Pipeline] sh
00:01:04.951 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:04.963 [Pipeline] sh
00:01:05.245 + cat autorun-spdk.conf
00:01:05.245 SPDK_TEST_UNITTEST=1
00:01:05.245 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.245 SPDK_TEST_NVME=1
00:01:05.245 SPDK_TEST_BLOCKDEV=1
00:01:05.245 SPDK_RUN_ASAN=1
00:01:05.245 SPDK_RUN_UBSAN=1
00:01:05.245 SPDK_TEST_RAID5=1
00:01:05.245 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:05.245 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:05.245 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:05.252 RUN_NIGHTLY=1
00:01:05.258 [Pipeline] }
00:01:05.274 [Pipeline] // stage
00:01:05.289 [Pipeline] stage
00:01:05.291 [Pipeline] { (Run VM)
00:01:05.304 [Pipeline] sh
00:01:05.587 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:05.587 + echo 'Start stage prepare_nvme.sh'
00:01:05.587 Start stage prepare_nvme.sh
00:01:05.587 + [[ -n 5 ]]
00:01:05.587 + disk_prefix=ex5
00:01:05.587 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:01:05.587 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:01:05.587 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:01:05.587 ++ SPDK_TEST_UNITTEST=1
00:01:05.587 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:05.587 ++ SPDK_TEST_NVME=1
00:01:05.587 ++ SPDK_TEST_BLOCKDEV=1
00:01:05.587 ++ SPDK_RUN_ASAN=1
00:01:05.587 ++ SPDK_RUN_UBSAN=1
00:01:05.587 ++ SPDK_TEST_RAID5=1
00:01:05.587 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:05.587 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:05.587 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:05.587 ++ RUN_NIGHTLY=1
00:01:05.587 + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:01:05.587 + nvme_files=()
00:01:05.587 + declare -A nvme_files
00:01:05.587 + backend_dir=/var/lib/libvirt/images/backends
00:01:05.587 + nvme_files['nvme.img']=5G
00:01:05.587 + nvme_files['nvme-cmb.img']=5G
00:01:05.587 + nvme_files['nvme-multi0.img']=4G
00:01:05.587 + nvme_files['nvme-multi1.img']=4G
00:01:05.587 + nvme_files['nvme-multi2.img']=4G
00:01:05.587 + nvme_files['nvme-openstack.img']=8G
00:01:05.587 + nvme_files['nvme-zns.img']=5G
00:01:05.587 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:05.587 + (( SPDK_TEST_FTL == 1 ))
00:01:05.587 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:05.587 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:05.587 + for nvme in "${!nvme_files[@]}"
00:01:05.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:05.587 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:05.587 + for nvme in "${!nvme_files[@]}"
00:01:05.587 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:06.154 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.154 + for nvme in "${!nvme_files[@]}"
00:01:06.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:06.154 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:06.154 + for nvme in "${!nvme_files[@]}"
00:01:06.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:06.154 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.154 + for nvme in "${!nvme_files[@]}"
00:01:06.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:06.154 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.154 + for nvme in "${!nvme_files[@]}"
00:01:06.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:06.154 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.154 + for nvme in "${!nvme_files[@]}"
00:01:06.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:06.722 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.722 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:06.722 + echo 'End stage prepare_nvme.sh'
00:01:06.722 End stage prepare_nvme.sh
00:01:06.735 [Pipeline] sh
00:01:07.051 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:07.051 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f ubuntu2204
00:01:07.051
00:01:07.051 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:01:07.051 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:01:07.051 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:01:07.051 HELP=0
00:01:07.051 DRY_RUN=0
00:01:07.051 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,
00:01:07.051 NVME_DISKS_TYPE=nvme,
00:01:07.051 NVME_AUTO_CREATE=0
00:01:07.051 NVME_DISKS_NAMESPACES=,
00:01:07.051 NVME_CMB=,
00:01:07.051 NVME_PMR=,
00:01:07.051 NVME_ZNS=,
00:01:07.051 NVME_MS=,
00:01:07.051 NVME_FDP=,
00:01:07.051 SPDK_VAGRANT_DISTRO=ubuntu2204
00:01:07.051 SPDK_VAGRANT_VMCPU=10
00:01:07.051 SPDK_VAGRANT_VMRAM=12288
00:01:07.051 SPDK_VAGRANT_PROVIDER=libvirt
00:01:07.051 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:07.051 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:07.051 SPDK_OPENSTACK_NETWORK=0
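prepare_nvme.sh walks an associative array of image names and sizes and hands each entry to create_nvme_img.sh. A minimal sketch of that loop (the qemu-img invocation is an assumption inferred from the "Formatting ... fmt=raw ... preallocation=falloc" output above; the real helper lives in spdk/scripts/vagrant/ and may do more):

  #!/bin/bash
  # Pre-create raw backing files for the VM's emulated NVMe disks.
  declare -A nvme_files=(
      [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G
      [nvme-multi1.img]=4G [nvme-multi2.img]=4G
      [nvme-openstack.img]=8G [nvme-zns.img]=5G
  )
  backend_dir=/var/lib/libvirt/images/backends
  disk_prefix=ex5

  mkdir -p "$backend_dir"
  for nvme in "${!nvme_files[@]}"; do
      # preallocation=falloc reserves the blocks without writing zeroes.
      qemu-img create -f raw -o preallocation=falloc \
          "$backend_dir/$disk_prefix-$nvme" "${nvme_files[$nvme]}"
  done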
00:01:07.051 VAGRANT_PACKAGE_BOX=0
00:01:07.051 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:07.051 FORCE_DISTRO=true
00:01:07.051 VAGRANT_BOX_VERSION=
00:01:07.051 EXTRA_VAGRANTFILES=
00:01:07.051 NIC_MODEL=e1000
00:01:07.051
00:01:07.051 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:01:07.051 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:01:09.584 Bringing machine 'default' up with 'libvirt' provider...
00:01:10.152 ==> default: Creating image (snapshot of base box volume).
00:01:10.152 ==> default: Creating domain with the following settings...
00:01:10.153 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1731938581_ac2c2cf976e1d62d1b05
00:01:10.153 ==> default: -- Domain type: kvm
00:01:10.153 ==> default: -- Cpus: 10
00:01:10.153 ==> default: -- Feature: acpi
00:01:10.153 ==> default: -- Feature: apic
00:01:10.153 ==> default: -- Feature: pae
00:01:10.153 ==> default: -- Memory: 12288M
00:01:10.153 ==> default: -- Memory Backing: hugepages:
00:01:10.153 ==> default: -- Management MAC:
00:01:10.153 ==> default: -- Loader:
00:01:10.153 ==> default: -- Nvram:
00:01:10.153 ==> default: -- Base box: spdk/ubuntu2204
00:01:10.153 ==> default: -- Storage pool: default
00:01:10.153 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1731938581_ac2c2cf976e1d62d1b05.img (20G)
00:01:10.153 ==> default: -- Volume Cache: default
00:01:10.153 ==> default: -- Kernel:
00:01:10.153 ==> default: -- Initrd:
00:01:10.153 ==> default: -- Graphics Type: vnc
00:01:10.153 ==> default: -- Graphics Port: -1
00:01:10.153 ==> default: -- Graphics IP: 127.0.0.1
00:01:10.153 ==> default: -- Graphics Password: Not defined
00:01:10.153 ==> default: -- Video Type: cirrus
00:01:10.153 ==> default: -- Video VRAM: 9216
00:01:10.153 ==> default: -- Sound Type:
00:01:10.153 ==> default: -- Keymap: en-us
00:01:10.153 ==> default: -- TPM Path:
00:01:10.153 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:10.153 ==> default: -- Command line args:
00:01:10.153 ==> default: -> value=-device,
00:01:10.153 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:10.153 ==> default: -> value=-drive,
00:01:10.153 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:01:10.153 ==> default: -> value=-device,
00:01:10.153 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.153 ==> default: Creating shared folders metadata...
00:01:10.153 ==> default: Starting domain.
00:01:11.532 ==> default: Waiting for domain to get an IP address...
00:01:21.507 ==> default: Waiting for SSH to become available...
00:01:22.444 ==> default: Configuring and enabling network interfaces...
00:01:26.635 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:31.909 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:36.098 ==> default: Mounting SSHFS shared folder...
00:01:36.666 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:01:36.666 ==> default: Checking Mount..
00:01:37.602 ==> default: Folder Successfully Mounted!
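The "Command line args" above attach the raw backend file as an emulated NVMe controller plus namespace. Pulled out of the log, the equivalent standalone QEMU arguments would look roughly like this (only the -device/-drive values come from the log; the surrounding qemu-system-x86_64 invocation and machine flags are illustrative):

  # Attach a raw image as an NVMe controller + namespace (modern QEMU nvme-ns syntax).
  qemu-system-x86_64 -machine q35,accel=kvm -m 12288 \
      -device nvme,id=nvme-0,serial=12340 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Splitting the controller (nvme) from the namespace (nvme-ns) is what lets the multi-namespace images (nvme-multi0/1/2) hang several namespaces off one controller in other job configurations.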
00:01:37.602 ==> default: Running provisioner: file...
00:01:37.861 default: ~/.gitconfig => .gitconfig
00:01:38.119
00:01:38.119 SUCCESS!
00:01:38.119
00:01:38.119 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:01:38.119 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:38.119 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:01:38.119
00:01:38.128 [Pipeline] }
00:01:38.142 [Pipeline] // stage
00:01:38.151 [Pipeline] dir
00:01:38.152 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:01:38.153 [Pipeline] {
00:01:38.165 [Pipeline] catchError
00:01:38.167 [Pipeline] {
00:01:38.178 [Pipeline] sh
00:01:38.508 + vagrant ssh-config --host vagrant
00:01:38.508 + sed -ne /^Host/,$p
00:01:38.508 + tee ssh_conf
00:01:41.794 Host vagrant
00:01:41.794 HostName 192.168.121.139
00:01:41.794 User vagrant
00:01:41.794 Port 22
00:01:41.794 UserKnownHostsFile /dev/null
00:01:41.794 StrictHostKeyChecking no
00:01:41.794 PasswordAuthentication no
00:01:41.794 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:01:41.794 IdentitiesOnly yes
00:01:41.794 LogLevel FATAL
00:01:41.794 ForwardAgent yes
00:01:41.794 ForwardX11 yes
00:01:41.794
00:01:41.809 [Pipeline] withEnv
00:01:41.811 [Pipeline] {
00:01:41.825 [Pipeline] sh
00:01:42.106 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:42.106 source /etc/os-release
00:01:42.106 [[ -e /image.version ]] && img=$(< /image.version)
00:01:42.106 # Minimal, systemd-like check.
00:01:42.106 if [[ -e /.dockerenv ]]; then
00:01:42.106 # Clear garbage from the node's name:
00:01:42.106 # agt-er_autotest_547-896 -> autotest_547-896
00:01:42.106 # $HOSTNAME is the actual container id
00:01:42.106 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:42.106 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:42.106 # We can assume this is a mount from a host where container is running,
00:01:42.106 # so fetch its hostname to easily identify the target swarm worker.
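The stanza above captures vagrant's generated SSH settings into a plain file so later steps can drive the VM with stock ssh/scp instead of vagrant ssh. The same one-liner outside of xtrace (the sed range keeps everything from the first Host line on, discarding vagrant's informational preamble):

  # Save the VM's SSH parameters for use with plain ssh/scp -F.
  vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
  ssh -t -F ssh_conf vagrant@vagrant 'echo connected'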
00:01:42.106 container="$(< /etc/hostname) ($agent)"
00:01:42.106 else
00:01:42.106 # Fallback
00:01:42.106 container=$agent
00:01:42.106 fi
00:01:42.106 fi
00:01:42.106 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:42.106
00:01:42.377 [Pipeline] }
00:01:42.395 [Pipeline] // withEnv
00:01:42.403 [Pipeline] setCustomBuildProperty
00:01:42.418 [Pipeline] stage
00:01:42.421 [Pipeline] { (Tests)
00:01:42.441 [Pipeline] sh
00:01:42.725 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:42.998 [Pipeline] sh
00:01:43.279 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:43.552 [Pipeline] timeout
00:01:43.553 Timeout set to expire in 1 hr 30 min
00:01:43.555 [Pipeline] {
00:01:43.569 [Pipeline] sh
00:01:43.850 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:44.417 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:01:44.429 [Pipeline] sh
00:01:44.709 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:44.982 [Pipeline] sh
00:01:45.262 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:45.536 [Pipeline] sh
00:01:45.817 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:01:46.075 ++ readlink -f spdk_repo
00:01:46.075 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:46.075 + [[ -n /home/vagrant/spdk_repo ]]
00:01:46.075 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:46.075 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:46.075 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:46.075 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:46.075 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:46.075 + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:01:46.075 + cd /home/vagrant/spdk_repo
00:01:46.075 + source /etc/os-release
00:01:46.075 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:01:46.075 ++ NAME=Ubuntu
00:01:46.075 ++ VERSION_ID=22.04
00:01:46.075 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:01:46.075 ++ VERSION_CODENAME=jammy
00:01:46.075 ++ ID=ubuntu
00:01:46.075 ++ ID_LIKE=debian
00:01:46.075 ++ HOME_URL=https://www.ubuntu.com/
00:01:46.075 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:46.075 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:46.075 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:46.075 ++ UBUNTU_CODENAME=jammy
00:01:46.075 + uname -a
00:01:46.075 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:46.075 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:46.075 Hugepages
00:01:46.075 node hugesize free / total
00:01:46.075 node0 1048576kB 0 / 0
00:01:46.075 node0 2048kB 0 / 0
00:01:46.075
00:01:46.075 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:46.075 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:46.335 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:46.335 + rm -f /tmp/spdk-ld-path
00:01:46.335 + source autorun-spdk.conf
00:01:46.335 ++ SPDK_TEST_UNITTEST=1
00:01:46.335 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.335 ++ SPDK_TEST_NVME=1
00:01:46.335 ++ SPDK_TEST_BLOCKDEV=1
00:01:46.335 ++ SPDK_RUN_ASAN=1
00:01:46.335 ++ SPDK_RUN_UBSAN=1
00:01:46.335 ++ SPDK_TEST_RAID5=1
00:01:46.335 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:46.335 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:46.335 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:46.335 ++ RUN_NIGHTLY=1
00:01:46.335 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:46.335 + [[ -n '' ]]
00:01:46.335 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:46.335 + for M in /var/spdk/build-*-manifest.txt
00:01:46.335 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:46.335 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:46.335 + for M in /var/spdk/build-*-manifest.txt
00:01:46.335 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:46.335 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:46.335 ++ uname
00:01:46.335 + [[ Linux == \L\i\n\u\x ]]
00:01:46.335 + sudo dmesg -T
00:01:46.335 + sudo dmesg --clear
00:01:46.335 + dmesg_pid=2253
00:01:46.335 + [[ Ubuntu == FreeBSD ]]
00:01:46.335 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:46.335 + sudo dmesg -Tw
00:01:46.335 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:46.335 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:46.335 + [[ -x /usr/src/fio-static/fio ]]
00:01:46.335 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:46.335 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:46.335 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:46.335 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:46.335 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:46.335 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:46.335 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:46.335 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:46.335 Test configuration:
00:01:46.335 SPDK_TEST_UNITTEST=1
00:01:46.335 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.335 SPDK_TEST_NVME=1
00:01:46.335 SPDK_TEST_BLOCKDEV=1
00:01:46.335 SPDK_RUN_ASAN=1
00:01:46.335 SPDK_RUN_UBSAN=1
00:01:46.335 SPDK_TEST_RAID5=1
00:01:46.335 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:46.335 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:46.335 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:46.335 RUN_NIGHTLY=1
14:03:37 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
14:03:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
14:03:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
14:03:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:03:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:03:37 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
14:03:37 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
14:03:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
14:03:37 -- paths/export.sh@5 -- $ export PATH
14:03:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
14:03:37 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
14:03:37 -- common/autobuild_common.sh@440 -- $ date +%s
14:03:37 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731938617.XXXXXX
14:03:37 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731938617.F77MMR
14:03:37 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
14:03:37 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']'
14:03:37 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
14:03:37 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
14:03:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
14:03:37 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
14:03:37 -- common/autobuild_common.sh@456 -- $ get_config_params
14:03:37 -- common/autotest_common.sh@397 -- $ xtrace_disable
14:03:37 -- common/autotest_common.sh@10 -- $ set +x
14:03:37 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
14:03:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
14:03:37 -- spdk/autobuild.sh@12 -- $ umask 022
14:03:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
14:03:37 -- spdk/autobuild.sh@16 -- $ date -u
00:01:46.335 Mon Nov 18 14:03:37 UTC 2024
14:03:37 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:46.335 LTS-67-gc13c99a5e
14:03:37 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
14:03:37 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
14:03:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
14:03:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable
14:03:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.595 ************************************
00:01:46.595 START TEST asan
00:01:46.595 ************************************
00:01:46.595 using asan
14:03:37 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:01:46.595
00:01:46.595 real 0m0.000s
00:01:46.595 user 0m0.000s
00:01:46.595 sys 0m0.000s
14:03:37 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:46.595 ************************************
00:01:46.595 END TEST asan
00:01:46.595 ************************************
14:03:37 -- common/autotest_common.sh@10 -- $ set +x
14:03:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
14:03:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
14:03:37 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
14:03:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable
14:03:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.595 ************************************
00:01:46.595 START TEST ubsan
00:01:46.595 ************************************
00:01:46.595 using ubsan
14:03:37 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:01:46.595
00:01:46.595 real 0m0.000s
00:01:46.595 user 0m0.000s
00:01:46.595 sys 0m0.000s
14:03:37 -- common/autotest_common.sh@1115 -- $ xtrace_disable
14:03:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.595 ************************************
00:01:46.595 END TEST ubsan
00:01:46.595 ************************************
14:03:37 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
14:03:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
14:03:37 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk
14:03:37 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
14:03:37 -- common/autotest_common.sh@1093 -- $ xtrace_disable
14:03:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.595 ************************************
00:01:46.595 START TEST build_native_dpdk
00:01:46.595 ************************************
14:03:37 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk
14:03:37 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
14:03:37 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
14:03:37 -- common/autobuild_common.sh@50 -- $ local compiler_version
14:03:37 -- common/autobuild_common.sh@51 -- $ local compiler
14:03:37 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
14:03:37 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
14:03:37 -- common/autobuild_common.sh@55 -- $ compiler=gcc
14:03:37 -- common/autobuild_common.sh@61 -- $ export CC=gcc
14:03:37 -- common/autobuild_common.sh@61 -- $ CC=gcc
14:03:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
14:03:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
14:03:37 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
14:03:37 -- common/autobuild_common.sh@68 -- $ compiler_version=11
14:03:37 -- common/autobuild_common.sh@69 -- $ compiler_version=11
14:03:37 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
14:03:37 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
14:03:37 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
14:03:37 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
14:03:37 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
14:03:37 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:46.595 caf0f5d395 version: 22.11.4
00:01:46.595 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:46.595 dc9c799c7d vhost: fix missing spinlock unlock
00:01:46.595 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:46.595 6ef77f2a5e net/gve: fix RX buffer size alignment
14:03:37 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
14:03:37 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
14:03:37 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
14:03:37 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
14:03:37 -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]]
14:03:37 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
14:03:37 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
14:03:37 -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]]
14:03:37 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
14:03:37 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
14:03:37 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
14:03:37 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
14:03:37 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
14:03:37 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
14:03:37 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
14:03:37 -- common/autobuild_common.sh@168 -- $ uname -s
14:03:37 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
14:03:37 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
14:03:37 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0
14:03:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l
14:03:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l
14:03:37 -- scripts/common.sh@335 -- $ IFS=.-:
14:03:37 -- scripts/common.sh@335 -- $ read -ra ver1
14:03:37 -- scripts/common.sh@336 -- $ IFS=.-:
14:03:37 -- scripts/common.sh@336 -- $ read -ra ver2
14:03:37 -- scripts/common.sh@337 -- $ local 'op=<'
14:03:37 -- scripts/common.sh@339 -- $ ver1_l=3
14:03:37 -- scripts/common.sh@340 -- $ ver2_l=3
14:03:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
14:03:37 -- scripts/common.sh@343 -- $ case "$op" in
14:03:37 -- scripts/common.sh@344 -- $ : 1
14:03:37 -- scripts/common.sh@363 -- $ (( v = 0 ))
14:03:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
14:03:37 -- scripts/common.sh@364 -- $ decimal 22
14:03:37 -- scripts/common.sh@352 -- $ local d=22
14:03:37 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
14:03:37 -- scripts/common.sh@354 -- $ echo 22
14:03:37 -- scripts/common.sh@364 -- $ ver1[v]=22
14:03:37 -- scripts/common.sh@365 -- $ decimal 21
14:03:37 -- scripts/common.sh@352 -- $ local d=21
14:03:37 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
14:03:37 -- scripts/common.sh@354 -- $ echo 21
14:03:37 -- scripts/common.sh@365 -- $ ver2[v]=21
14:03:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
14:03:37 -- scripts/common.sh@366 -- $ return 1
14:03:37 -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:46.596 patching file config/rte_config.h
00:01:46.596 Hunk #1 succeeded at 60 (offset 1 line).
14:03:37 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
14:03:37 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0
14:03:37 -- scripts/common.sh@332 -- $ local ver1 ver1_l
14:03:37 -- scripts/common.sh@333 -- $ local ver2 ver2_l
14:03:37 -- scripts/common.sh@335 -- $ IFS=.-:
14:03:37 -- scripts/common.sh@335 -- $ read -ra ver1
14:03:37 -- scripts/common.sh@336 -- $ IFS=.-:
14:03:37 -- scripts/common.sh@336 -- $ read -ra ver2
14:03:37 -- scripts/common.sh@337 -- $ local 'op=<'
14:03:37 -- scripts/common.sh@339 -- $ ver1_l=3
14:03:37 -- scripts/common.sh@340 -- $ ver2_l=3
14:03:37 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
14:03:37 -- scripts/common.sh@343 -- $ case "$op" in
14:03:37 -- scripts/common.sh@344 -- $ : 1
14:03:37 -- scripts/common.sh@363 -- $ (( v = 0 ))
14:03:37 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
14:03:37 -- scripts/common.sh@364 -- $ decimal 22
14:03:37 -- scripts/common.sh@352 -- $ local d=22
14:03:37 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
14:03:37 -- scripts/common.sh@354 -- $ echo 22
14:03:37 -- scripts/common.sh@364 -- $ ver1[v]=22
14:03:37 -- scripts/common.sh@365 -- $ decimal 24
14:03:37 -- scripts/common.sh@352 -- $ local d=24
14:03:37 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
14:03:37 -- scripts/common.sh@354 -- $ echo 24
14:03:37 -- scripts/common.sh@365 -- $ ver2[v]=24
14:03:37 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
14:03:37 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
14:03:37 -- scripts/common.sh@367 -- $ return 0
14:03:37 -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:46.596 patching file lib/pcapng/rte_pcapng.c
00:01:46.596 Hunk #1 succeeded at 110 (offset -18 lines).
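The long xtrace above is scripts/common.sh comparing dotted version strings field by field: 22.11.4 < 21.11.0 is false, so the first patch gate falls through, while 22.11.4 < 24.07.0 is true, so the rte_pcapng patch is applied. A condensed sketch of the same idea (simplified for illustration; not the actual cmp_versions implementation, which also handles >, =, and mixed-length versions):

  #!/bin/bash
  # Return 0 if dotted version $1 sorts before $2, as the `lt` helper above does.
  version_lt() {
      local IFS=.-: v
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( v = 0; v < ${#a[@]} && v < ${#b[@]}; v++ )); do
          (( a[v] > b[v] )) && return 1
          (( a[v] < b[v] )) && return 0
      done
      return 1
  }

  version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"
  version_lt 22.11.4 24.07.0 && echo "22.11.4 predates 24.07.0"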
00:01:46.596 14:03:37 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
14:03:37 -- common/autobuild_common.sh@181 -- $ uname -s
14:03:37 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
14:03:37 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
14:03:37 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:51.866 The Meson build system
00:01:51.866 Version: 1.4.0
00:01:51.866 Source dir: /home/vagrant/spdk_repo/dpdk
00:01:51.866 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:01:51.866 Build type: native build
00:01:51.866 Program cat found: YES (/usr/bin/cat)
00:01:51.866 Project name: DPDK
00:01:51.866 Project version: 22.11.4
00:01:51.866 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:01:51.866 C linker for the host machine: gcc ld.bfd 2.38
00:01:51.866 Host machine cpu family: x86_64
00:01:51.866 Host machine cpu: x86_64
00:01:51.866 Message: ## Building in Developer Mode ##
00:01:51.866 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:51.866 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:01:51.866 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:01:51.866 Program objdump found: YES (/usr/bin/objdump)
00:01:51.866 Program python3 found: YES (/usr/bin/python3)
00:01:51.866 Program cat found: YES (/usr/bin/cat)
00:01:51.866 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:51.866 Checking for size of "void *" : 8
00:01:51.866 Checking for size of "void *" : 8 (cached)
00:01:51.866 Library m found: YES
00:01:51.866 Library numa found: YES
00:01:51.866 Has header "numaif.h" : YES
00:01:51.866 Library fdt found: NO
00:01:51.866 Library execinfo found: NO
00:01:51.866 Has header "execinfo.h" : YES
00:01:51.866 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:01:51.866 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:51.866 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:51.866 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:51.866 Run-time dependency openssl found: YES 3.0.2
00:01:51.866 Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:51.866 Library pcap found: NO
00:01:51.866 Compiler for C supports arguments -Wcast-qual: YES
00:01:51.866 Compiler for C supports arguments -Wdeprecated: YES
00:01:51.866 Compiler for C supports arguments -Wformat: YES
00:01:51.866 Compiler for C supports arguments -Wformat-nonliteral: YES
00:01:51.866 Compiler for C supports arguments -Wformat-security: YES
00:01:51.866 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:51.866 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:51.866 Compiler for C supports arguments -Wnested-externs: YES
00:01:51.866 Compiler for C supports arguments -Wold-style-definition: YES
00:01:51.866 Compiler for C supports arguments -Wpointer-arith: YES
00:01:51.866 Compiler for C supports arguments -Wsign-compare: YES
00:01:51.866 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:51.866 Compiler for C supports arguments -Wundef: YES
00:01:51.866 Compiler for C supports arguments -Wwrite-strings: YES
00:01:51.866 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:51.866 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:51.866 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:51.866 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:51.866 Compiler for C supports arguments -mavx512f: YES
00:01:51.866 Checking if "AVX512 checking" compiles: YES
00:01:51.866 Fetching value of define "__SSE4_2__" : 1
00:01:51.866 Fetching value of define "__AES__" : 1
00:01:51.866 Fetching value of define "__AVX__" : 1
00:01:51.866 Fetching value of define "__AVX2__" : 1
00:01:51.866 Fetching value of define "__AVX512BW__" : (undefined)
00:01:51.866 Fetching value of define "__AVX512CD__" : (undefined)
00:01:51.866 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:51.866 Fetching value of define "__AVX512F__" : (undefined)
00:01:51.866 Fetching value of define "__AVX512VL__" : (undefined)
00:01:51.866 Fetching value of define "__PCLMUL__" : 1
00:01:51.866 Fetching value of define "__RDRND__" : 1
00:01:51.866 Fetching value of define "__RDSEED__" : 1
00:01:51.866 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:51.866 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:51.866 Message: lib/kvargs: Defining dependency "kvargs"
00:01:51.866 Message: lib/telemetry: Defining dependency "telemetry"
00:01:51.866 Checking for function "getentropy" : YES
00:01:51.866 Message: lib/eal: Defining dependency "eal"
00:01:51.866 Message: lib/ring: Defining dependency "ring"
00:01:51.866 Message: lib/rcu: Defining dependency "rcu"
00:01:51.866 Message: lib/mempool: Defining dependency "mempool"
00:01:51.866 Message: lib/mbuf: Defining dependency "mbuf"
00:01:51.866 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:51.866 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:51.866 Compiler for C supports arguments -mpclmul: YES
00:01:51.866 Compiler for C supports arguments -maes: YES
00:01:51.866 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:51.867 Compiler for C supports arguments -mavx512bw: YES
00:01:51.867 Compiler for C supports arguments -mavx512dq: YES
00:01:51.867 Compiler for C supports arguments -mavx512vl: YES
00:01:51.867 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:51.867 Compiler for C supports arguments -mavx2: YES
00:01:51.867 Compiler for C supports arguments -mavx: YES
00:01:51.867 Message: lib/net: Defining dependency "net"
00:01:51.867 Message: lib/meter: Defining dependency "meter"
00:01:51.867 Message: lib/ethdev: Defining dependency "ethdev"
00:01:51.867 Message: lib/pci: Defining dependency "pci"
00:01:51.867 Message: lib/cmdline: Defining dependency "cmdline"
00:01:51.867 Message: lib/metrics: Defining dependency "metrics"
00:01:51.867 Message: lib/hash: Defining dependency "hash"
00:01:51.867 Message: lib/timer: Defining dependency "timer"
00:01:51.867 Fetching value of define "__AVX2__" : 1 (cached)
00:01:51.867 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:51.867 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:01:51.867 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:01:51.867 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:01:51.867 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:01:51.867 Message: lib/acl: Defining dependency "acl"
00:01:51.867 Message: lib/bbdev: Defining dependency "bbdev"
00:01:51.867 Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:51.867 Run-time dependency libelf found: YES 0.186
00:01:51.867 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled
00:01:51.867 Message: lib/bpf: Defining dependency "bpf"
00:01:51.867 Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:51.867 Message: lib/compressdev: Defining dependency "compressdev"
00:01:51.867 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:51.867 Message: lib/distributor: Defining dependency "distributor"
00:01:51.867 Message: lib/efd: Defining dependency "efd"
00:01:51.867 Message: lib/eventdev: Defining dependency "eventdev"
00:01:51.867 Message: lib/gpudev: Defining dependency "gpudev"
00:01:51.867 Message: lib/gro: Defining dependency "gro"
00:01:51.867 Message: lib/gso: Defining dependency "gso"
00:01:51.867 Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:51.867 Message: lib/jobstats: Defining dependency "jobstats"
00:01:51.867 Message: lib/latencystats: Defining dependency "latencystats"
00:01:51.867 Message: lib/lpm: Defining dependency "lpm"
00:01:51.867 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:51.867 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:51.867 Fetching value of define "__AVX512IFMA__" : (undefined)
00:01:51.867 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:01:51.867 Message: lib/member: Defining dependency "member"
00:01:51.867 Message: lib/pcapng: Defining dependency "pcapng"
00:01:51.867 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:51.867 Message: lib/power: Defining dependency "power"
00:01:51.867 Message: lib/rawdev: Defining dependency "rawdev"
00:01:51.867 Message: lib/regexdev: Defining dependency "regexdev"
00:01:51.867 Message: lib/dmadev: Defining dependency "dmadev"
00:01:51.867 Message: lib/rib: Defining dependency "rib"
00:01:51.867 Message: lib/reorder: Defining dependency "reorder"
00:01:51.867 Message: lib/sched: Defining dependency "sched"
00:01:51.867 Message: lib/security: Defining dependency "security"
00:01:51.867 Message: lib/stack: Defining dependency "stack"
00:01:51.867 Has header "linux/userfaultfd.h" : YES
00:01:51.867 Message: lib/vhost: Defining dependency "vhost"
00:01:51.867 Message: lib/ipsec: Defining dependency "ipsec"
00:01:51.867 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:51.867 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:51.867 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:01:51.867 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:51.867 Message: lib/fib: Defining dependency "fib"
00:01:51.867 Message: lib/port: Defining dependency "port"
00:01:51.867 Message: lib/pdump: Defining dependency "pdump"
00:01:51.867 Message: lib/table: Defining dependency "table"
00:01:51.867 Message: lib/pipeline: Defining dependency "pipeline"
00:01:51.867 Message: lib/graph: Defining dependency "graph"
00:01:51.867 Message: lib/node: Defining dependency "node"
00:01:51.867 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:51.867 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:51.867 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:51.867 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:51.867 Compiler for C supports arguments -Wno-sign-compare: YES
00:01:51.867 Compiler for C supports arguments -Wno-unused-value: YES
00:01:51.867 Compiler for C supports arguments -Wno-format: YES
00:01:51.867 Compiler for C supports arguments -Wno-format-security: YES
00:01:52.805 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:01:52.805 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:52.805 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:01:52.805 Compiler for C supports arguments -Wno-unused-parameter: YES
00:01:52.805 Fetching value of define "__AVX2__" : 1 (cached)
00:01:52.805 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:52.805 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:52.805 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:52.805 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:52.805 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:52.805 Program doxygen found: YES (/usr/bin/doxygen)
00:01:52.805 Configuring doxy-api.conf using configuration
00:01:52.805 Program sphinx-build found: NO
00:01:52.805 Configuring rte_build_config.h using configuration
00:01:52.805 Message:
00:01:52.805 =================
00:01:52.805 Applications Enabled
00:01:52.805 =================
00:01:52.805
00:01:52.805 apps:
00:01:52.805 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev,
00:01:52.805 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf,
00:01:52.805
00:01:52.805
00:01:52.805 Message:
00:01:52.805 =================
00:01:52.805 Libraries Enabled
00:01:52.805 =================
00:01:52.805
00:01:52.805 libs:
00:01:52.805 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:01:52.805 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:01:52.805 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:01:52.805 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:01:52.805 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:01:52.805 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:01:52.805 table, pipeline, graph, node,
00:01:52.805
00:01:52.805
00:01:52.805 Message:
00:01:52.805 ===============
00:01:52.805 Drivers Enabled
00:01:52.805 ===============
00:01:52.805
00:01:52.805 common:
00:01:52.805
00:01:52.805 bus:
00:01:52.805 pci, vdev,
00:01:52.805 mempool:
00:01:52.805 ring,
00:01:52.805 dma:
00:01:52.805
00:01:52.805 net:
00:01:52.805 i40e,
00:01:52.805 raw:
00:01:52.805
00:01:52.805 crypto:
00:01:52.805
00:01:52.805 compress:
00:01:52.805
00:01:52.805 regex:
00:01:52.805
00:01:52.805 vdpa:
00:01:52.805
00:01:52.805 event:
00:01:52.805
00:01:52.805 baseband:
00:01:52.805
00:01:52.805 gpu:
00:01:52.805
00:01:52.806
00:01:52.806 Message:
00:01:52.806 =================
00:01:52.806 Content Skipped
00:01:52.806 =================
00:01:52.806
00:01:52.806 apps:
00:01:52.806 dumpcap: missing dependency, "libpcap"
00:01:52.806
00:01:52.806 libs:
00:01:52.806 kni: explicitly disabled via build config (deprecated lib)
00:01:52.806 flow_classify: explicitly disabled via build config (deprecated lib)
00:01:52.806
00:01:52.806 drivers:
00:01:52.806 common/cpt: not in enabled drivers build config
00:01:52.806 common/dpaax: not in enabled drivers build config
00:01:52.806 common/iavf: not in enabled drivers build config
00:01:52.806 common/idpf: not in enabled drivers build config
00:01:52.806 common/mvep: not in enabled drivers build config
00:01:52.806 common/octeontx: not in enabled drivers build config
00:01:52.806 bus/auxiliary: not in enabled drivers build config
00:01:52.806 bus/dpaa: not in enabled drivers build config
00:01:52.806 bus/fslmc: not in enabled drivers build config
00:01:52.806 bus/ifpga: not in enabled drivers build config
00:01:52.806 bus/vmbus: not in enabled drivers build config
00:01:52.806 common/cnxk: not in enabled drivers build config
00:01:52.806 common/mlx5: not in enabled drivers build config
00:01:52.806 common/qat: not in enabled drivers build config
00:01:52.806 common/sfc_efx: not in enabled drivers build config
00:01:52.806 mempool/bucket: not in enabled drivers build config
00:01:52.806 mempool/cnxk: not in enabled drivers build config
00:01:52.806 mempool/dpaa: not in enabled drivers build config
00:01:52.806 mempool/dpaa2: not in enabled drivers build config
00:01:52.806 mempool/octeontx: not in enabled drivers build config
00:01:52.806 mempool/stack: not in enabled drivers build config
00:01:52.806 dma/cnxk: not in enabled drivers build config
00:01:52.806 dma/dpaa: not in enabled drivers build config
00:01:52.806 dma/dpaa2: not in enabled drivers build config
00:01:52.806 dma/hisilicon: not in enabled drivers build config
00:01:52.806 dma/idxd: not in enabled drivers build config
00:01:52.806 dma/ioat: not in enabled drivers build config
00:01:52.806 dma/skeleton: not in enabled drivers build config
00:01:52.806 net/af_packet: not in enabled drivers build config
00:01:52.806 net/af_xdp: not in enabled drivers build config
00:01:52.806 net/ark: not in enabled drivers build config
00:01:52.806 net/atlantic: not in enabled drivers build config
00:01:52.806 net/avp: not in enabled drivers build config
00:01:52.806 net/axgbe: not in enabled drivers build config
00:01:52.806 net/bnx2x: not in enabled drivers build config
00:01:52.806 net/bnxt: not in enabled drivers build config
00:01:52.806 net/bonding: not in enabled drivers build config
00:01:52.806 net/cnxk: not in enabled drivers build config
00:01:52.806 net/cxgbe: not in enabled drivers build config
00:01:52.806 net/dpaa: not in enabled drivers build config
00:01:52.806 net/dpaa2: not in enabled drivers build config
00:01:52.806 net/e1000: not in enabled drivers build config
00:01:52.806 net/ena: not in enabled drivers build config
00:01:52.806 net/enetc: not in enabled drivers build config
00:01:52.806 net/enetfec: not in enabled drivers build config
00:01:52.806 net/enic: not in enabled drivers build config
00:01:52.806 net/failsafe: not in enabled drivers build config
00:01:52.806 net/fm10k: not in enabled drivers build config
00:01:52.806 net/gve: not in enabled drivers build config
00:01:52.806 net/hinic: not in enabled drivers build config
00:01:52.806 net/hns3: not in enabled drivers build config
00:01:52.806 net/iavf: not in enabled drivers build config
00:01:52.806 net/ice: not in enabled drivers build config
00:01:52.806 net/idpf: not in enabled drivers build config
00:01:52.806 net/igc: not in enabled drivers build config
00:01:52.806 net/ionic: not in enabled drivers build config
00:01:52.806 net/ipn3ke: not in enabled drivers build config
00:01:52.806 net/ixgbe: not in enabled drivers build config
00:01:52.806 net/kni: not in enabled drivers build config
00:01:52.806 net/liquidio: not in enabled drivers build config
00:01:52.806 net/mana: not in enabled drivers build config
00:01:52.806 net/memif: not in enabled drivers build config
00:01:52.806 net/mlx4: not in enabled drivers build config
00:01:52.806 net/mlx5: not in enabled drivers build config
00:01:52.806 net/mvneta: not in enabled drivers build config
00:01:52.806 net/mvpp2: not in enabled drivers build config
00:01:52.806 net/netvsc: not in enabled drivers build config
00:01:52.806 net/nfb: not in enabled drivers build config
00:01:52.806 net/nfp: not in enabled drivers build config
00:01:52.806 net/ngbe: not in enabled drivers build config
00:01:52.806 net/null: not in enabled drivers build config
00:01:52.806 net/octeontx: not in enabled drivers build config
00:01:52.806 net/octeon_ep: not in enabled drivers build config
00:01:52.806 net/pcap: not in enabled drivers build config
00:01:52.806 net/pfe: not in enabled drivers build config
00:01:52.806 net/qede: not in enabled drivers build config
00:01:52.806 net/ring: not in enabled drivers build config
00:01:52.806 net/sfc: not in enabled drivers build config
00:01:52.806 net/softnic: not in enabled drivers build config
00:01:52.806 net/tap: not in enabled drivers build config
00:01:52.806 net/thunderx: not in enabled drivers build config
00:01:52.806 net/txgbe: not in enabled drivers build config
00:01:52.806 net/vdev_netvsc: not in enabled drivers build config
00:01:52.806 net/vhost: not in enabled drivers build config
00:01:52.806 net/virtio: not in enabled drivers build config
00:01:52.806 net/vmxnet3: not in enabled drivers build config
00:01:52.806 raw/cnxk_bphy: not in enabled drivers build config
00:01:52.806 raw/cnxk_gpio: not in enabled drivers build config
00:01:52.806 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:52.806 raw/ifpga: not in enabled drivers build config
00:01:52.806 raw/ntb: not in enabled drivers build config
00:01:52.806 raw/skeleton: not in enabled drivers build config
00:01:52.806 crypto/armv8: not in enabled drivers build config
00:01:52.806 crypto/bcmfs: not in enabled drivers build config
00:01:52.806 crypto/caam_jr: not in enabled drivers build config
00:01:52.806 crypto/ccp: not in enabled drivers build config
00:01:52.806 crypto/cnxk: not in enabled drivers build config
00:01:52.806 crypto/dpaa_sec: not in enabled drivers build config
00:01:52.806 crypto/dpaa2_sec: not in enabled drivers build config
00:01:52.806 crypto/ipsec_mb: not in enabled drivers build config
00:01:52.806 crypto/mlx5: not in enabled drivers build config
00:01:52.806 crypto/mvsam: not in enabled drivers build config
00:01:52.806 crypto/nitrox: not in enabled drivers build config
00:01:52.806 crypto/null: not in enabled drivers build config
00:01:52.806 crypto/octeontx: not in enabled drivers build config
00:01:52.806 crypto/openssl: not in enabled drivers build config
00:01:52.806 crypto/scheduler: not in enabled drivers build config
00:01:52.806 crypto/uadk: not in enabled drivers build config
00:01:52.806 crypto/virtio: not in enabled drivers build config
00:01:52.806 compress/isal: not in enabled drivers build config
00:01:52.806 compress/mlx5: not in enabled drivers build config
00:01:52.806 compress/octeontx: not in enabled drivers build config
00:01:52.806 compress/zlib: not in enabled drivers build config
00:01:52.806 regex/mlx5: not in enabled drivers build config
00:01:52.806 regex/cn9k: not in enabled drivers build config
00:01:52.806 vdpa/ifc: not in enabled drivers build config
00:01:52.806 vdpa/mlx5: not in enabled drivers build config
00:01:52.806 vdpa/sfc: not in enabled drivers build config
00:01:52.806 event/cnxk: not in enabled drivers build config
00:01:52.806 event/dlb2: not in enabled drivers build config
00:01:52.806 event/dpaa: not in enabled drivers build config
00:01:52.806 event/dpaa2: not in enabled drivers build config
00:01:52.806 event/dsw: not in enabled drivers build config
00:01:52.806 event/opdl: not in enabled drivers build config
00:01:52.806 event/skeleton: not in enabled drivers build config
00:01:52.806 event/sw: not in enabled drivers build config
00:01:52.806 event/octeontx: not in enabled drivers build config
00:01:52.807 baseband/acc: not in enabled drivers build config
00:01:52.807 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:52.807 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:52.807 baseband/la12xx: not in enabled drivers build config
00:01:52.807 baseband/null: not in enabled drivers build config
00:01:52.807 baseband/turbo_sw: not in enabled drivers build config
00:01:52.807 gpu/cuda: not in enabled drivers build config
00:01:52.807 
00:01:52.807 
00:01:52.807 Build targets in project: 313
00:01:52.807 
00:01:52.807 DPDK 22.11.4
00:01:52.807 
00:01:52.807 User defined options
00:01:52.807 libdir : lib
00:01:52.807 prefix : /home/vagrant/spdk_repo/dpdk/build
00:01:52.807 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:52.807 c_link_args : 
00:01:52.807 enable_docs : false
00:01:52.807 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:52.807 enable_kmods : false
00:01:52.807 machine : native
00:01:52.807 tests : false
00:01:52.807 
00:01:52.807 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:52.807 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
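For reference, the "User defined options" summary above corresponds to a Meson invocation roughly like the sketch below. This is a reconstruction from the logged option values, not the exact command the autotest harness ran, and it is written in the explicit `meson setup` form that the deprecation warning above asks for:

  # Sketch only: option values taken verbatim from the "User defined options" block above.
  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false

The enable_drivers list is what keeps the "Drivers Enabled" section down to bus/pci, bus/vdev, mempool/ring and net/i40e, while every other driver lands under "Content Skipped" as "not in enabled drivers build config".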
00:01:52.807 14:03:43 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:52.807 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:52.807 [1/740] Generating lib/rte_kvargs_def with a custom command
00:01:52.807 [2/740] Generating lib/rte_telemetry_def with a custom command
00:01:52.807 [3/740] Generating lib/rte_telemetry_mingw with a custom command
00:01:52.807 [4/740] Generating lib/rte_kvargs_mingw with a custom command
00:01:52.807 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:52.807 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:52.807 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:52.807 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:52.807 [9/740] Linking static target lib/librte_kvargs.a
00:01:52.807 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:52.807 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:53.065 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:53.065 [13/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:53.065 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:53.065 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:53.065 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:53.065 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:53.065 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:53.065 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:53.065 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:53.324 [21/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.324 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:01:53.324 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:53.324 [24/740] Linking target lib/librte_kvargs.so.23.0
00:01:53.324 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:53.324 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:53.324 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:53.324 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:53.324 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:53.324 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:53.324 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:53.324 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:53.324 [33/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:01:53.324 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:53.583 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:53.583 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:53.583 [37/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:53.583 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:53.583 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:53.583 [40/740] Linking static target lib/librte_telemetry.a
00:01:53.583 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:53.583 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:53.583 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:53.583 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:53.583 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:53.842 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:53.842 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:53.842 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:53.842 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:53.842 [50/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.842 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:53.842 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:53.842 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:53.842 [54/740] Linking target lib/librte_telemetry.so.23.0
00:01:53.842 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:53.842 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:53.842 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:53.842 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:53.842 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:53.842 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:53.842 [61/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:53.842 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:54.100 [63/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:01:54.100 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:54.100 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:54.100 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:01:54.100 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:54.100 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:54.100 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:54.100 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:54.100 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:54.100 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:54.100 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:54.100 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:54.100 [75/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:54.100 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:54.100 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:54.100 [78/740] Generating lib/rte_eal_def with a custom command
00:01:54.360 [79/740] Generating lib/rte_eal_mingw with a custom command
00:01:54.360 [80/740] Generating lib/rte_ring_def with a custom command
00:01:54.360 [81/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:54.360 [82/740] Generating lib/rte_ring_mingw with a custom command
00:01:54.360 [83/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:54.360 [84/740] Generating lib/rte_rcu_def with a custom command
00:01:54.360 [85/740] Generating lib/rte_rcu_mingw with a custom command
00:01:54.360 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:54.360 [87/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:54.360 [88/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:54.360 [89/740] Linking static target lib/librte_ring.a
00:01:54.360 [90/740] Generating lib/rte_mempool_def with a custom command
00:01:54.360 [91/740] Generating lib/rte_mempool_mingw with a custom command
00:01:54.360 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:54.360 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:54.619 [94/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:54.619 [95/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.619 [96/740] Generating lib/rte_mbuf_def with a custom command
00:01:54.619 [97/740] Generating lib/rte_mbuf_mingw with a custom command
00:01:54.619 [98/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:54.877 [99/740] Linking static target lib/librte_eal.a
00:01:54.877 [100/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:54.877 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:54.877 [102/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:54.877 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:54.877 [104/740] Linking static target lib/librte_rcu.a
00:01:54.877 [105/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:55.136 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:55.136 [107/740] Linking static target lib/librte_mempool.a
00:01:55.136 [108/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.136 [109/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:55.136 [110/740] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:55.136 [111/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:55.136 [112/740] Generating lib/rte_net_mingw with a custom command
00:01:55.136 [113/740] Generating lib/rte_net_def with a custom command
00:01:55.136 [114/740] Generating lib/rte_meter_def with a custom command
00:01:55.136 [115/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:55.136 [116/740] Generating lib/rte_meter_mingw with a custom command
00:01:55.394 [117/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:55.394 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:55.394 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:55.394 [120/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:55.394 [121/740] Linking static target lib/librte_meter.a
00:01:55.653 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:55.653 [123/740] Linking static target lib/librte_net.a
00:01:55.653 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.653 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:55.653 [126/740] Linking static target lib/librte_mbuf.a
00:01:55.653 [127/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:55.653 [128/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.653 [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:55.912 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:55.912 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:55.912 [132/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.912 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:56.171 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:56.171 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.171 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:56.171 [137/740] Generating lib/rte_ethdev_def with a custom command
00:01:56.430 [138/740] Generating lib/rte_ethdev_mingw with a custom command
00:01:56.430 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:56.430 [140/740] Generating lib/rte_pci_def with a custom command
00:01:56.430 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:56.430 [142/740] Linking static target lib/librte_pci.a
00:01:56.430 [143/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:56.430 [144/740] Generating lib/rte_pci_mingw with a custom command
00:01:56.430 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:56.430 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:56.689 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:56.689 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.689 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:56.689 [150/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:56.689 [151/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:56.689 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:56.689 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:56.689 [154/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:56.689 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:56.689 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:56.689 [157/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:56.689 [158/740] Generating lib/rte_cmdline_def with a custom command
00:01:56.689 [159/740] Generating lib/rte_cmdline_mingw with a custom command
00:01:56.948 [160/740] Generating lib/rte_metrics_def with a custom command
00:01:56.948 [161/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:56.948 [162/740] Generating lib/rte_metrics_mingw with a custom command
00:01:56.948 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:56.948 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:56.948 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:56.948 [166/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:56.948 [167/740] Generating lib/rte_hash_def with a custom command
00:01:56.948 [168/740] Generating lib/rte_hash_mingw with a custom command
00:01:56.948 [169/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:56.948 [170/740] Linking static target lib/librte_cmdline.a
00:01:56.948 [171/740] Generating lib/rte_timer_def with a custom command
00:01:56.948 [172/740] Generating lib/rte_timer_mingw with a custom command
00:01:57.207 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:57.207 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:57.207 [175/740] Linking static target lib/librte_metrics.a
00:01:57.466 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:57.466 [177/740] Linking static target lib/librte_timer.a
00:01:57.466 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.466 [179/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:57.724 [180/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.724 [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:57.983 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:57.983 [183/740] Linking static target lib/librte_ethdev.a
00:01:57.983 [184/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.983 [185/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:57.983 [186/740] Generating lib/rte_acl_def with a custom command
00:01:57.983 [187/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:57.983 [188/740] Generating lib/rte_acl_mingw with a custom command
00:01:57.983 [189/740] Generating lib/rte_bbdev_def with a custom command
00:01:58.242 [190/740] Generating lib/rte_bbdev_mingw with a custom command
00:01:58.242 [191/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:01:58.242 [192/740] Generating lib/rte_bitratestats_def with a custom command
00:01:58.242 [193/740] Generating lib/rte_bitratestats_mingw with a custom command
00:01:58.242 [194/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:01:58.501 [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:58.501 [196/740] Linking static target lib/librte_bitratestats.a
00:01:58.501 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:58.759 [198/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:58.759 [199/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.018 [200/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:01:59.018 [201/740] Linking static target lib/librte_bbdev.a
00:01:59.018 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:59.277 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:01:59.277 [204/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:59.277 [205/740] Linking static target lib/librte_hash.a
00:01:59.536 [206/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:01:59.536 [207/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:59.536 [208/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:01:59.536 [209/740] Generating lib/rte_bpf_def with a custom command
00:01:59.536 [210/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.536 [211/740] Generating lib/rte_bpf_mingw with a custom command
00:01:59.536 [212/740] Generating lib/rte_cfgfile_def with a custom command
00:01:59.536 [213/740] Generating lib/rte_cfgfile_mingw with a custom command
00:01:59.795 [214/740] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:01:59.795 [215/740] Linking static target lib/acl/libavx512_tmp.a
00:01:59.795 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:59.795 [217/740] Linking static target lib/librte_cfgfile.a
00:01:59.795 [218/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.053 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:00.053 [220/740] Generating lib/rte_compressdev_def with a custom command
00:02:00.053 [221/740] Generating lib/rte_compressdev_mingw with a custom command
00:02:00.053 [222/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:00.053 [223/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.053 [224/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:00.313 [225/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:00.313 [226/740] Generating lib/rte_cryptodev_def with a custom command
00:02:00.313 [227/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:00.313 [228/740] Linking static target lib/librte_acl.a
00:02:00.313 [229/740] Generating lib/rte_cryptodev_mingw with a custom command
00:02:00.313 [230/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:00.571 [231/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:00.571 [232/740] Linking static target lib/librte_compressdev.a
00:02:00.571 [233/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.571 [234/740] Generating lib/rte_distributor_def with a custom command
00:02:00.571 [235/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:00.571 [236/740] Linking static target lib/librte_bpf.a
00:02:00.571 [237/740] Generating lib/rte_distributor_mingw with a custom command
00:02:00.571 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:00.831 [239/740] Generating lib/rte_efd_def with a custom command
00:02:00.831 [240/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:00.831 [241/740] Generating lib/rte_efd_mingw with a custom command
00:02:00.831 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:00.831 [243/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.089 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:01.089 [245/740] Linking static target lib/librte_distributor.a
00:02:01.089 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:01.348 [247/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:01.348 [248/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.348 [249/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.606 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:01.606 [251/740] Generating lib/rte_eventdev_def with a custom command
00:02:01.606 [252/740] Generating lib/rte_eventdev_mingw with a custom command
00:02:01.866 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:01.866 [254/740] Linking static target lib/librte_efd.a
00:02:01.866 [255/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:01.866 [256/740] Generating lib/rte_gpudev_def with a custom command
00:02:01.866 [257/740] Generating lib/rte_gpudev_mingw with a custom command
00:02:02.124 [258/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:02.124 [259/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.124 [260/740] Linking static target lib/librte_cryptodev.a
00:02:02.383 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:02.383 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:02.383 [263/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:02.383 [264/740] Linking static target lib/librte_gpudev.a
00:02:02.383 [265/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:02.642 [266/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:02.642 [267/740] Generating lib/rte_gro_def with a custom command
00:02:02.642 [268/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:02.642 [269/740] Generating lib/rte_gro_mingw with a custom command
00:02:02.642 [270/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.642 [271/740] Linking target lib/librte_eal.so.23.0
00:02:02.901 [272/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.901 [273/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:02.901 [274/740] Linking target lib/librte_ring.so.23.0
00:02:02.901 [275/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:02.901 [276/740] Linking target lib/librte_meter.so.23.0
00:02:02.901 [277/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:02.901 [278/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:02.901 [279/740] Linking target lib/librte_pci.so.23.0
00:02:02.901 [280/740] Linking target lib/librte_rcu.so.23.0
00:02:02.901 [281/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:02.901 [282/740] Linking target lib/librte_mempool.so.23.0
00:02:02.901 [283/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:03.160 [284/740] Linking target lib/librte_timer.so.23.0
00:02:03.160 [285/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:03.160 [286/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:03.160 [287/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:03.160 [288/740] Linking static target lib/librte_gro.a
00:02:03.160 [289/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:03.160 [290/740] Linking target lib/librte_cfgfile.so.23.0
00:02:03.160 [291/740] Linking target lib/librte_acl.so.23.0
00:02:03.160 [292/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:03.160 [293/740] Linking static target lib/librte_eventdev.a
00:02:03.160 [294/740] Linking target lib/librte_mbuf.so.23.0
00:02:03.160 [295/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.160 [296/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:03.160 [297/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:03.160 [298/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:03.160 [299/740] Generating lib/rte_gso_def with a custom command
00:02:03.160 [300/740] Generating lib/rte_gso_mingw with a custom command
00:02:03.160 [301/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:03.160 [302/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:03.160 [303/740] Linking target lib/librte_net.so.23.0
00:02:03.160 [304/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.419 [305/740] Linking target lib/librte_bbdev.so.23.0
00:02:03.419 [306/740] Linking target lib/librte_compressdev.so.23.0
00:02:03.419 [307/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:03.419 [308/740] Linking target lib/librte_distributor.so.23.0
00:02:03.419 [309/740] Linking target lib/librte_ethdev.so.23.0
00:02:03.419 [310/740] Linking target lib/librte_cmdline.so.23.0
00:02:03.419 [311/740] Linking target lib/librte_hash.so.23.0
00:02:03.419 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:03.419 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:03.419 [314/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:03.678 [315/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:03.678 [316/740] Linking target lib/librte_gpudev.so.23.0
00:02:03.678 [317/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:03.678 [318/740] Linking target lib/librte_metrics.so.23.0
00:02:03.678 [319/740] Linking target lib/librte_gro.so.23.0
00:02:03.678 [320/740] Linking static target lib/librte_gso.a
00:02:03.678 [321/740] Linking target lib/librte_efd.so.23.0
00:02:03.678 [322/740] Linking target lib/librte_bpf.so.23.0
00:02:03.678 [323/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:03.678 [324/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:03.678 [325/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:03.678 [326/740] Linking target lib/librte_bitratestats.so.23.0
00:02:03.678 [327/740] Generating lib/rte_ip_frag_def with a custom command
00:02:03.678 [328/740] Generating lib/rte_ip_frag_mingw with a custom command
00:02:03.678 [329/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:03.678 [330/740] Generating lib/rte_jobstats_def with a custom command
00:02:03.937 [331/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:03.937 [332/740] Generating lib/rte_jobstats_mingw with a custom command
00:02:03.937 [333/740] Linking target lib/librte_gso.so.23.0
00:02:03.937 [334/740] Generating lib/rte_latencystats_def with a custom command
00:02:03.937 [335/740] Generating lib/rte_latencystats_mingw with a custom command
00:02:03.937 [336/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:03.937 [337/740] Linking static target lib/librte_jobstats.a
00:02:03.937 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:03.937 [339/740] Generating lib/rte_lpm_def with a custom command
00:02:03.937 [340/740] Generating lib/rte_lpm_mingw with a custom command
00:02:03.937 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:04.196 [342/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.196 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:04.196 [344/740] Linking target lib/librte_jobstats.so.23.0
00:02:04.196 [345/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.196 [346/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:04.196 [347/740] Linking static target lib/librte_latencystats.a
00:02:04.196 [348/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:04.455 [349/740] Linking static target lib/librte_ip_frag.a
00:02:04.455 [350/740] Linking target lib/librte_cryptodev.so.23.0
00:02:04.455 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:04.455 [352/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:04.455 [353/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:04.455 [354/740] Generating lib/rte_member_def with a custom command
00:02:04.455 [355/740] Generating lib/rte_member_mingw with a custom command
00:02:04.455 [356/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:04.455 [357/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:04.455 [358/740] Generating lib/rte_pcapng_def with a custom command
00:02:04.455 [359/740] Generating lib/rte_pcapng_mingw with a custom command
00:02:04.455 [360/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.455 [361/740] Linking target lib/librte_latencystats.so.23.0
00:02:04.713 [362/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:04.713 [363/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:04.713 [364/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.713 [365/740] Linking target lib/librte_ip_frag.so.23.0
00:02:04.713 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:04.713 [367/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:04.972 [368/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:04.972 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:04.972 [370/740] Linking static target lib/librte_lpm.a
00:02:04.972 [371/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:04.972 [372/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:04.972 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:04.972 [374/740] Generating lib/rte_power_def with a custom command
00:02:04.972 [375/740] Generating lib/rte_power_mingw with a custom command
00:02:04.972 [376/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:04.972 [377/740] Generating lib/rte_rawdev_def with a custom command
00:02:05.230 [378/740] Generating lib/rte_rawdev_mingw with a custom command
00:02:05.230 [379/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:05.230 [380/740] Linking static target lib/librte_pcapng.a
00:02:05.230 [381/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:05.230 [382/740] Generating lib/rte_regexdev_def with a custom command
00:02:05.230 [383/740] Generating lib/rte_regexdev_mingw with a custom command
00:02:05.230 [384/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.230 [385/740] Linking target lib/librte_lpm.so.23.0
00:02:05.230 [386/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.230 [387/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:05.230 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:05.230 [389/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:05.230 [390/740] Generating lib/rte_dmadev_def with a custom command
00:02:05.230 [391/740] Linking static target lib/librte_rawdev.a
00:02:05.230 [392/740] Linking target lib/librte_eventdev.so.23.0
00:02:05.230 [393/740] Generating lib/rte_dmadev_mingw with a custom command
00:02:05.488 [394/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:05.488 [395/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.488 [396/740] Generating lib/rte_rib_def with a custom command
00:02:05.488 [397/740] Generating lib/rte_rib_mingw with a custom command
00:02:05.489 [398/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:05.489 [399/740] Linking target lib/librte_pcapng.so.23.0
00:02:05.489 [400/740] Generating lib/rte_reorder_def with a custom command
00:02:05.489 [401/740] Generating lib/rte_reorder_mingw with a custom command
00:02:05.489 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:05.489 [403/740] Linking static target lib/librte_power.a
00:02:05.489 [404/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:05.489 [405/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:05.489 [406/740] Linking static target lib/librte_dmadev.a
00:02:05.746 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:05.746 [408/740] Linking static target lib/librte_regexdev.a
00:02:05.746 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:05.746 [410/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:05.746 [411/740] Linking static target lib/librte_member.a
00:02:05.746 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:05.746 [413/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:05.746 [414/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.746 [415/740] Generating lib/rte_sched_def with a custom command
00:02:06.005 [416/740] Linking target lib/librte_rawdev.so.23.0
00:02:06.005 [417/740] Generating lib/rte_sched_mingw with a custom command
00:02:06.005 [418/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:06.005 [419/740] Generating lib/rte_security_def with a custom command
00:02:06.005 [420/740] Generating lib/rte_security_mingw with a custom command
00:02:06.005 [421/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:06.005 [422/740] Linking static target lib/librte_reorder.a
00:02:06.005 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:06.005 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:06.005 [425/740] Generating lib/rte_stack_def with a custom command
00:02:06.005 [426/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.005 [427/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.005 [428/740] Generating lib/rte_stack_mingw with a custom command
00:02:06.005 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:06.005 [430/740] Linking target lib/librte_member.so.23.0
00:02:06.005 [431/740] Linking static target lib/librte_stack.a
00:02:06.005 [432/740] Linking target lib/librte_dmadev.so.23.0
00:02:06.263 [433/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.263 [434/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:06.263 [435/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:06.263 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:06.263 [437/740] Linking static target lib/librte_rib.a
00:02:06.263 [438/740] Linking target lib/librte_reorder.so.23.0
00:02:06.263 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.263 [440/740] Linking target lib/librte_stack.so.23.0
00:02:06.263 [441/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.263 [442/740] Linking target lib/librte_regexdev.so.23.0
00:02:06.522 [443/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:06.522 [444/740] Linking static target lib/librte_security.a
00:02:06.522 [445/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.522 [446/740] Linking target lib/librte_power.so.23.0
00:02:06.522 [447/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:06.781 [448/740] Generating lib/rte_vhost_def with a custom command
00:02:06.781 [449/740] Generating lib/rte_vhost_mingw with a custom command
00:02:06.781 [450/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.781 [451/740] Linking target lib/librte_rib.so.23.0
00:02:06.781 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:06.781 [453/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:07.039 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:07.040 [455/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:07.040 [456/740] Linking static target lib/librte_sched.a
00:02:07.040 [457/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.040 [458/740] Linking target lib/librte_security.so.23.0
00:02:07.040 [459/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:07.298 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:07.298 [461/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:07.298 [462/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:07.298 [463/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.596 [464/740] Generating lib/rte_ipsec_mingw with a custom command
00:02:07.596 [465/740] Generating lib/rte_ipsec_def with a custom command
00:02:07.596 [466/740] Linking target lib/librte_sched.so.23.0
00:02:07.596 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:07.596 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:07.870 [469/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:07.870 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:07.870 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:07.870 [472/740] Generating lib/rte_fib_def with a custom command
00:02:07.870 [473/740] Generating lib/rte_fib_mingw with a custom command
00:02:07.870 [474/740] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:07.870 [475/740] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:07.871 [476/740] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:07.871 [477/740] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:08.130 [478/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:08.130 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:08.130 [480/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:08.130 [481/740] Linking static target lib/librte_ipsec.a
00:02:08.388 [482/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.646 [483/740] Linking target lib/librte_ipsec.so.23.0
00:02:08.646 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:08.646 [485/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:08.646 [486/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:08.646 [487/740] Linking static target lib/librte_fib.a
00:02:08.646 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:08.646 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:08.646 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:08.905 [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.905 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:08.905 [493/740] Linking target lib/librte_fib.so.23.0
00:02:09.162 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:09.162 [495/740] Generating lib/rte_port_def with a custom command
00:02:09.162 [496/740] Generating lib/rte_port_mingw with a custom command
00:02:09.421 [497/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:09.421 [498/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:09.421 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:09.421 [500/740] Generating lib/rte_pdump_def with a custom command
00:02:09.421 [501/740] Generating lib/rte_pdump_mingw with a custom command
00:02:09.421 [502/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:09.421 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:09.421 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:09.679 [505/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:09.679 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:09.679 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:09.679 [508/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:09.938 [509/740] Linking static target lib/librte_port.a
00:02:09.938 [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:09.938 [511/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:09.938 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:09.938 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:10.197 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:10.197 [515/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:10.197 [516/740] Linking static target lib/librte_pdump.a
00:02:10.456 [517/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.456 [518/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.456 [519/740] Linking target lib/librte_port.so.23.0
00:02:10.456 [520/740] Linking target lib/librte_pdump.so.23.0
00:02:10.456 [521/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:10.715 [522/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:10.715 [523/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:10.715 [524/740] Generating lib/rte_table_def with a custom command
00:02:10.715 [525/740] Generating lib/rte_table_mingw with a custom command
00:02:10.715 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:10.715 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:10.974 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:10.974 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:10.974 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:10.974 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:10.974 [532/740] Linking static target lib/librte_table.a
00:02:10.974 [533/740] Generating lib/rte_pipeline_def with a custom command
00:02:10.974 [534/740] Generating lib/rte_pipeline_mingw with a custom command
00:02:11.234 [535/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:11.494 [536/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:11.494 [537/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:11.494 [538/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:11.494 [539/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.755 [540/740] Linking target lib/librte_table.so.23.0
00:02:11.755 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:11.755 [542/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:11.755 [543/740] Generating lib/rte_graph_def with a custom command
00:02:11.755 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:11.755 [545/740] Generating lib/rte_graph_mingw with a custom command
00:02:12.014 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:12.014 [547/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:12.014 [548/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:12.014 [549/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:12.014 [550/740] Linking static target lib/librte_graph.a
00:02:12.273 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:12.273 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:12.273 [553/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:12.273 [554/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:12.535 [555/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:12.795 [556/740] Generating lib/rte_node_def with a custom command
00:02:12.795 [557/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:12.795 [558/740] Generating lib/rte_node_mingw with a custom command
00:02:12.795 [559/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:13.054 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:13.054 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:13.054 [562/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:13.054 [563/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.054 [564/740] Linking target lib/librte_graph.so.23.0
00:02:13.054 [565/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:13.054 [566/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:13.054 [567/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:13.054 [568/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:13.054 [569/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:13.054 [570/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:13.054 [571/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:13.054 [572/740] Linking static target lib/librte_node.a
00:02:13.054 [573/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:13.054 [574/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:13.054 [575/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:13.312 [576/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:13.312 [577/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:13.312 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:13.312 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:13.312 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:13.312 [581/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:13.312 [582/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:13.572 [583/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.572 [584/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:13.572 [585/740] Linking target lib/librte_node.so.23.0
00:02:13.572 [586/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:13.572 [587/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:13.572 [588/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:13.572 [589/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:13.572 [590/740] Linking static target drivers/librte_bus_vdev.a
00:02:13.572 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:13.572 [592/740] Linking static target drivers/librte_bus_pci.a
00:02:13.831 [593/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.831 [594/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:13.831 [595/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:13.831 [596/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.831 [597/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:13.831 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:13.831 [599/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:13.831 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:14.090 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:14.090 [602/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:14.090 [603/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:14.090 [604/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:14.090 [605/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:14.090 [606/740] Linking static target drivers/librte_mempool_ring.a
00:02:14.090 [607/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:14.349 [608/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:14.349 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:14.608 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:14.867 [611/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:15.126 [612/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:15.126 [613/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:15.126 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:15.385 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:15.385 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:15.644 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:15.903 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:15.903 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:15.903 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:15.903 [621/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:15.903 [622/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:16.162 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:16.162 [624/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:16.731 [625/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:16.990 [626/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:16.990 [627/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:16.990 [628/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:17.249 [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:17.249 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:17.249 [631/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:17.249 [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:17.507 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:17.766 [634/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:17.766 [635/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:17.766 [636/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:17.766 [637/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:18.024 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:18.024 [639/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:18.283 [640/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:18.283 [641/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:18.283 [642/740] Linking static target drivers/librte_net_i40e.a
00:02:18.283 [643/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:18.283 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:18.283 [645/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:18.283 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:18.283 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:18.541 [648/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:18.541 [649/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:18.541 [650/740] Linking static target lib/librte_vhost.a
00:02:18.800 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:18.800 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:19.058 [653/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.058 [654/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:19.058 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:19.317 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:19.317 [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:19.317 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:19.317 [659/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
[660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:19.317 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:19.575 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:19.575 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:19.847 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:19.847 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:19.847 [666/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:20.117 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:20.117 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:20.117 [669/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.117 [670/740] Linking target lib/librte_vhost.so.23.0 00:02:20.118 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:20.683 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:20.683 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:20.941 [674/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:20.941 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:20.941 [676/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:21.200 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:21.200 [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:21.200 [679/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:21.200 [680/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:21.458 [681/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:21.458 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:21.458 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:21.716 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:21.716 [685/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:21.716 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:21.716 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:21.975 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:21.975 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:21.975 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:22.233 [691/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:22.491 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:22.491 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:22.491 [694/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:22.491 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:22.749 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:22.749 
[697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:23.316 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:23.316 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:23.316 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:23.316 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:23.576 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:23.834 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:23.834 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:23.834 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:24.093 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:24.093 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:24.093 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:24.661 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:24.661 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:24.661 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:24.920 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:24.920 [713/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:24.920 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:25.178 [715/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:25.178 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:25.178 [717/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:25.178 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:25.178 [719/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:25.178 [720/740] Linking static target lib/librte_pipeline.a 00:02:25.746 [721/740] Linking target app/dpdk-test-cmdline 00:02:25.746 [722/740] Linking target app/dpdk-test-crypto-perf 00:02:25.746 [723/740] Linking target app/dpdk-pdump 00:02:25.746 [724/740] Linking target app/dpdk-test-bbdev 00:02:25.746 [725/740] Linking target app/dpdk-test-acl 00:02:25.746 [726/740] Linking target app/dpdk-proc-info 00:02:25.746 [727/740] Linking target app/dpdk-test-compress-perf 00:02:25.746 [728/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:26.005 [729/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:26.264 [730/740] Linking target app/dpdk-test-eventdev 00:02:26.264 [731/740] Linking target app/dpdk-test-fib 00:02:26.264 [732/740] Linking target app/dpdk-test-gpudev 00:02:26.264 [733/740] Linking target app/dpdk-test-flow-perf 00:02:26.264 [734/740] Linking target app/dpdk-test-regex 00:02:26.264 [735/740] Linking target app/dpdk-test-pipeline 00:02:26.264 [736/740] Linking target app/dpdk-test-security-perf 00:02:26.264 [737/740] Linking target app/dpdk-test-sad 00:02:26.522 [738/740] Linking target app/dpdk-testpmd 00:02:29.056 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.056 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:29.056 14:04:20 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:29.056 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:29.056 
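The install step above is the standard meson/ninja flow for DPDK. A minimal sketch of the equivalent standalone commands follows; only the `ninja -C ... -j10 install` invocation actually appears in this log, while the `meson setup` line and its --prefix value are assumptions inferred from the install destinations listed below:

  # assumed configure step: build tree in build-tmp, staged install under dpdk/build
  meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
  # compile with 10 parallel jobs, as common/autobuild_common.sh does above
  ninja -C build-tmp -j10
  # install the libraries, headers, and the examples/ tree listed below
  ninja -C build-tmp -j10 install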
[0/1] Installing files. 00:02:29.627 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.627 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.628 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.628 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:29.629 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.629 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.630 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.630 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:29.630 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.630 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.889 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 
Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:29.890 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:29.890 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:29.890 Installing 
drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:29.890 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:29.890 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:29.890 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.151 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.152 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.152 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.152 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.152 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.152 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.153 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.154 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.155 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.155 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.155 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:30.155 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:30.155 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:30.155 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:30.155 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:30.155 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:30.155 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:30.155 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:30.155 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:30.155 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:30.155 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:30.155 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:30.155 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:30.155 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:30.155 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:30.155 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:30.155 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:30.155 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:30.155 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:30.155 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:30.155 Installing symlink pointing to librte_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:30.155 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:30.155 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:30.155 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:30.155 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:30.155 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:30.155 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:30.155 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:30.155 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:30.155 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:30.155 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:30.155 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:30.155 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:30.155 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:30.155 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:30.155 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:30.155 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:30.155 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:30.155 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:30.155 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:30.155 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:30.155 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:30.155 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:30.155 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:30.155 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:30.155 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:30.155 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:30.155 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:30.155 Installing symlink pointing to librte_eventdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:30.155 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:30.155 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:30.155 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:30.155 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:30.155 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:30.155 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:30.155 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:30.155 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:30.155 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:30.155 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:30.155 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:30.155 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:30.155 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:30.155 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:30.155 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:30.155 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:30.155 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:30.155 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:30.155 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:30.155 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:30.155 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:30.155 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:30.155 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:30.155 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:30.155 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:30.155 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:30.155 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:30.155 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:30.156 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:30.156 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:30.156 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:30.156 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 
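The stretch above is the tail of the DPDK install step: each runtime library is laid down as one fully versioned file (librte_foo.so.23.0) plus two symlinks, the soname that the dynamic loader resolves (librte_foo.so.23) and an unversioned name for link-time use (librte_foo.so). The quoted './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' lines show the driver (PMD) libraries additionally being relocated into a pmds-23.0 plugin subdirectory, with compatibility symlinks installed afterwards. A minimal sketch of the symlink chain for one library, using paths from the log (illustrative only, not the Meson installer itself):

    lib=librte_kvargs
    prefix=/home/vagrant/spdk_repo/dpdk/build/lib
    # The real file carries the full ABI version; both shorter names are symlinks.
    ln -sf "${lib}.so.23.0" "${prefix}/${lib}.so.23"   # soname, resolved by ld.so at run time
    ln -sf "${lib}.so.23"   "${prefix}/${lib}.so"      # linker name, used by ld at build time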
00:02:30.156 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:30.156 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:30.156 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:30.156 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:30.156 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:30.156 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:30.156 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:30.156 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:30.156 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:30.156 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:30.156 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:30.156 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:30.156 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:30.156 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:30.156 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:30.156 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:30.156 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:30.156 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:30.156 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:30.156 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:30.156 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:30.156 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:30.156 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:30.156 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:30.156 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:30.156 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:30.156 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:30.156 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:30.156 Installing symlink pointing to librte_table.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:30.156 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:30.156 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:30.156 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:30.156 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:30.156 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:30.156 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:30.156 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:30.156 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:30.156 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:30.156 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:30.156 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:30.156 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:30.156 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:30.156 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:30.156 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:30.156 14:04:22 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:30.156 14:04:22 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:30.156 14:04:22 -- common/autobuild_common.sh@203 -- $ cat 00:02:30.156 14:04:22 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.156 00:02:30.156 real 0m44.333s 00:02:30.156 user 4m47.252s 00:02:30.156 sys 0m46.710s 00:02:30.156 14:04:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:30.156 14:04:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.156 ************************************ 00:02:30.156 END TEST build_native_dpdk 00:02:30.156 ************************************ 00:02:30.156 14:04:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.156 14:04:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.156 14:04:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.156 14:04:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.156 14:04:22 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:30.156 14:04:22 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:30.156 14:04:22 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build 00:02:30.156 14:04:22 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:30.156 14:04:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:30.156 14:04:22 -- common/autotest_common.sh@10 -- $ set +x 
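From here on, lines of the form '14:04:22 -- common/autobuild_common.sh@192 -- $ uname -s' are bash xtrace output: the harness appears to set a custom PS4 that carries a wall-clock timestamp plus the sourcing script and its line number. A rough reproduction of that prefix, assuming this convention (the harness's exact PS4 may differ in detail):

    # Approximation of the trace prefix seen in this log, not SPDK's literal code.
    export PS4='$(date +%H:%M:%S) -- ${BASH_SOURCE}@${LINENO} -- $ '
    set -x
    uname -s    # traces as e.g.: 14:04:22 -- ./demo.sh@4 -- $ uname -s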
00:02:30.156 ************************************ 00:02:30.156 START TEST unittest_build 00:02:30.156 ************************************ 00:02:30.156 14:04:22 -- common/autotest_common.sh@1114 -- $ _unittest_build 00:02:30.156 14:04:22 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:02:30.415 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:30.415 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.415 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:30.415 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:30.674 Using 'verbs' RDMA provider 00:02:46.131 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:58.333 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:58.333 Creating mk/config.mk...done. 00:02:58.333 Creating mk/cc.flags.mk...done. 00:02:58.333 Type 'make' to build. 00:02:58.333 14:04:49 -- common/autobuild_common.sh@408 -- $ make -j10 00:02:58.333 make[1]: Nothing to be done for 'all'. 00:03:16.422 CC lib/ut/ut.o 00:03:16.422 CC lib/log/log.o 00:03:16.422 CC lib/log/log_flags.o 00:03:16.422 CC lib/log/log_deprecated.o 00:03:16.422 CC lib/ut_mock/mock.o 00:03:16.422 LIB libspdk_ut_mock.a 00:03:16.422 LIB libspdk_log.a 00:03:16.422 LIB libspdk_ut.a 00:03:16.422 CXX lib/trace_parser/trace.o 00:03:16.422 CC lib/util/base64.o 00:03:16.422 CC lib/util/bit_array.o 00:03:16.422 CC lib/dma/dma.o 00:03:16.422 CC lib/util/cpuset.o 00:03:16.422 CC lib/ioat/ioat.o 00:03:16.422 CC lib/util/crc16.o 00:03:16.422 CC lib/util/crc32.o 00:03:16.422 CC lib/util/crc32c.o 00:03:16.422 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.422 CC lib/util/crc32_ieee.o 00:03:16.422 CC lib/util/crc64.o 00:03:16.422 CC lib/vfio_user/host/vfio_user.o 00:03:16.422 CC lib/util/dif.o 00:03:16.422 LIB libspdk_dma.a 00:03:16.422 CC lib/util/fd.o 00:03:16.422 CC lib/util/file.o 00:03:16.422 CC lib/util/hexlify.o 00:03:16.422 CC lib/util/iov.o 00:03:16.422 CC lib/util/math.o 00:03:16.422 CC lib/util/pipe.o 00:03:16.422 LIB libspdk_ioat.a 00:03:16.422 CC lib/util/strerror_tls.o 00:03:16.422 LIB libspdk_vfio_user.a 00:03:16.422 CC lib/util/string.o 00:03:16.422 CC lib/util/uuid.o 00:03:16.422 CC lib/util/fd_group.o 00:03:16.422 CC lib/util/xor.o 00:03:16.422 CC lib/util/zipf.o 00:03:16.422 LIB libspdk_util.a 00:03:16.422 CC lib/rdma/common.o 00:03:16.422 CC lib/rdma/rdma_verbs.o 00:03:16.422 CC lib/json/json_parse.o 00:03:16.422 CC lib/json/json_util.o 00:03:16.422 CC lib/json/json_write.o 00:03:16.422 CC lib/idxd/idxd.o 00:03:16.422 CC lib/env_dpdk/env.o 00:03:16.422 CC lib/conf/conf.o 00:03:16.422 CC lib/vmd/vmd.o 00:03:16.422 LIB libspdk_trace_parser.a 00:03:16.422 CC lib/vmd/led.o 00:03:16.681 CC lib/idxd/idxd_user.o 00:03:16.681 CC lib/env_dpdk/memory.o 00:03:16.681 LIB libspdk_conf.a 00:03:16.681 CC lib/env_dpdk/pci.o 00:03:16.681 CC lib/env_dpdk/init.o 00:03:16.681 LIB libspdk_rdma.a 00:03:16.681 LIB libspdk_json.a 00:03:16.681 CC lib/env_dpdk/threads.o 00:03:16.681 CC lib/env_dpdk/pci_ioat.o 00:03:16.681 CC lib/env_dpdk/pci_virtio.o 00:03:16.939 CC lib/env_dpdk/pci_vmd.o 00:03:16.939 CC lib/env_dpdk/pci_idxd.o 00:03:16.939 CC lib/env_dpdk/pci_event.o 00:03:16.939 CC 
lib/env_dpdk/sigbus_handler.o 00:03:16.939 CC lib/env_dpdk/pci_dpdk.o 00:03:16.939 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.939 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.939 LIB libspdk_idxd.a 00:03:17.198 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.198 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.198 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.198 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.198 LIB libspdk_vmd.a 00:03:17.456 LIB libspdk_jsonrpc.a 00:03:17.456 CC lib/rpc/rpc.o 00:03:17.714 LIB libspdk_rpc.a 00:03:17.714 CC lib/notify/notify_rpc.o 00:03:17.714 CC lib/notify/notify.o 00:03:17.714 CC lib/sock/sock.o 00:03:17.714 CC lib/sock/sock_rpc.o 00:03:17.714 CC lib/trace/trace.o 00:03:17.714 CC lib/trace/trace_flags.o 00:03:17.714 CC lib/trace/trace_rpc.o 00:03:17.973 LIB libspdk_env_dpdk.a 00:03:17.973 LIB libspdk_notify.a 00:03:17.973 LIB libspdk_trace.a 00:03:18.232 LIB libspdk_sock.a 00:03:18.232 CC lib/thread/iobuf.o 00:03:18.232 CC lib/thread/thread.o 00:03:18.491 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.491 CC lib/nvme/nvme_ctrlr.o 00:03:18.491 CC lib/nvme/nvme_fabric.o 00:03:18.491 CC lib/nvme/nvme_ns_cmd.o 00:03:18.491 CC lib/nvme/nvme_ns.o 00:03:18.491 CC lib/nvme/nvme_pcie_common.o 00:03:18.491 CC lib/nvme/nvme_pcie.o 00:03:18.491 CC lib/nvme/nvme_qpair.o 00:03:18.491 CC lib/nvme/nvme.o 00:03:19.058 CC lib/nvme/nvme_quirks.o 00:03:19.058 CC lib/nvme/nvme_transport.o 00:03:19.058 CC lib/nvme/nvme_discovery.o 00:03:19.058 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.058 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.323 CC lib/nvme/nvme_tcp.o 00:03:19.323 CC lib/nvme/nvme_opal.o 00:03:19.323 CC lib/nvme/nvme_io_msg.o 00:03:19.323 CC lib/nvme/nvme_poll_group.o 00:03:19.323 CC lib/nvme/nvme_zns.o 00:03:19.323 CC lib/nvme/nvme_cuse.o 00:03:19.614 CC lib/nvme/nvme_vfio_user.o 00:03:19.614 CC lib/nvme/nvme_rdma.o 00:03:19.884 LIB libspdk_thread.a 00:03:19.884 CC lib/blob/blobstore.o 00:03:19.884 CC lib/init/json_config.o 00:03:19.884 CC lib/accel/accel.o 00:03:19.884 CC lib/accel/accel_rpc.o 00:03:19.884 CC lib/virtio/virtio.o 00:03:20.142 CC lib/virtio/virtio_vhost_user.o 00:03:20.142 CC lib/init/subsystem.o 00:03:20.142 CC lib/init/subsystem_rpc.o 00:03:20.142 CC lib/init/rpc.o 00:03:20.142 CC lib/virtio/virtio_vfio_user.o 00:03:20.400 CC lib/accel/accel_sw.o 00:03:20.400 CC lib/blob/request.o 00:03:20.400 LIB libspdk_init.a 00:03:20.400 CC lib/blob/zeroes.o 00:03:20.400 CC lib/blob/blob_bs_dev.o 00:03:20.400 CC lib/virtio/virtio_pci.o 00:03:20.659 CC lib/event/app.o 00:03:20.659 CC lib/event/reactor.o 00:03:20.659 CC lib/event/log_rpc.o 00:03:20.659 CC lib/event/app_rpc.o 00:03:20.659 CC lib/event/scheduler_static.o 00:03:20.918 LIB libspdk_virtio.a 00:03:20.918 LIB libspdk_nvme.a 00:03:20.918 LIB libspdk_accel.a 00:03:21.176 LIB libspdk_event.a 00:03:21.176 CC lib/bdev/bdev.o 00:03:21.176 CC lib/bdev/bdev_zone.o 00:03:21.176 CC lib/bdev/bdev_rpc.o 00:03:21.177 CC lib/bdev/part.o 00:03:21.177 CC lib/bdev/scsi_nvme.o 00:03:23.080 LIB libspdk_blob.a 00:03:23.080 CC lib/lvol/lvol.o 00:03:23.080 CC lib/blobfs/blobfs.o 00:03:23.080 CC lib/blobfs/tree.o 00:03:23.648 LIB libspdk_bdev.a 00:03:23.907 CC lib/scsi/dev.o 00:03:23.907 CC lib/scsi/lun.o 00:03:23.907 CC lib/nbd/nbd.o 00:03:23.907 CC lib/scsi/port.o 00:03:23.907 CC lib/nbd/nbd_rpc.o 00:03:23.907 CC lib/scsi/scsi.o 00:03:23.907 CC lib/nvmf/ctrlr.o 00:03:23.907 CC lib/ftl/ftl_core.o 00:03:23.907 LIB libspdk_blobfs.a 00:03:23.907 CC lib/ftl/ftl_init.o 00:03:24.166 CC lib/nvmf/ctrlr_discovery.o 00:03:24.166 CC lib/scsi/scsi_bdev.o 00:03:24.166 CC 
lib/scsi/scsi_pr.o 00:03:24.166 LIB libspdk_lvol.a 00:03:24.166 CC lib/scsi/scsi_rpc.o 00:03:24.166 CC lib/scsi/task.o 00:03:24.166 CC lib/ftl/ftl_layout.o 00:03:24.166 CC lib/nvmf/ctrlr_bdev.o 00:03:24.166 CC lib/nvmf/subsystem.o 00:03:24.425 LIB libspdk_nbd.a 00:03:24.425 CC lib/nvmf/nvmf.o 00:03:24.425 CC lib/nvmf/nvmf_rpc.o 00:03:24.425 CC lib/nvmf/transport.o 00:03:24.425 CC lib/nvmf/tcp.o 00:03:24.684 CC lib/ftl/ftl_debug.o 00:03:24.684 CC lib/nvmf/rdma.o 00:03:24.684 LIB libspdk_scsi.a 00:03:24.684 CC lib/ftl/ftl_io.o 00:03:24.943 CC lib/ftl/ftl_sb.o 00:03:24.943 CC lib/ftl/ftl_l2p.o 00:03:24.943 CC lib/ftl/ftl_l2p_flat.o 00:03:24.943 CC lib/ftl/ftl_nv_cache.o 00:03:25.201 CC lib/ftl/ftl_band.o 00:03:25.201 CC lib/ftl/ftl_band_ops.o 00:03:25.201 CC lib/ftl/ftl_writer.o 00:03:25.201 CC lib/ftl/ftl_rq.o 00:03:25.460 CC lib/ftl/ftl_reloc.o 00:03:25.460 CC lib/ftl/ftl_l2p_cache.o 00:03:25.460 CC lib/ftl/ftl_p2l.o 00:03:25.460 CC lib/ftl/mngt/ftl_mngt.o 00:03:25.460 CC lib/iscsi/conn.o 00:03:25.460 CC lib/iscsi/init_grp.o 00:03:25.719 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:25.719 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:25.719 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.719 CC lib/iscsi/iscsi.o 00:03:25.978 CC lib/iscsi/md5.o 00:03:25.978 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.978 CC lib/iscsi/param.o 00:03:25.978 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.978 CC lib/vhost/vhost.o 00:03:25.978 CC lib/iscsi/portal_grp.o 00:03:25.978 CC lib/iscsi/tgt_node.o 00:03:26.238 CC lib/iscsi/iscsi_subsystem.o 00:03:26.238 CC lib/iscsi/iscsi_rpc.o 00:03:26.238 CC lib/iscsi/task.o 00:03:26.238 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.238 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.238 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.496 CC lib/vhost/vhost_rpc.o 00:03:26.496 CC lib/vhost/vhost_scsi.o 00:03:26.496 CC lib/vhost/vhost_blk.o 00:03:26.496 CC lib/vhost/rte_vhost_user.o 00:03:26.496 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.754 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.754 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.754 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.754 CC lib/ftl/utils/ftl_conf.o 00:03:26.754 CC lib/ftl/utils/ftl_md.o 00:03:26.754 LIB libspdk_nvmf.a 00:03:27.013 CC lib/ftl/utils/ftl_mempool.o 00:03:27.013 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.013 CC lib/ftl/utils/ftl_property.o 00:03:27.013 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.013 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:27.013 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.272 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.272 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.272 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.272 LIB libspdk_iscsi.a 00:03:27.272 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.272 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.272 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.272 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.272 CC lib/ftl/base/ftl_base_dev.o 00:03:27.272 CC lib/ftl/base/ftl_base_bdev.o 00:03:27.272 CC lib/ftl/ftl_trace.o 00:03:27.530 LIB libspdk_vhost.a 00:03:27.789 LIB libspdk_ftl.a 00:03:28.047 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.047 CC module/sock/posix/posix.o 00:03:28.047 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.047 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.047 CC module/blob/bdev/blob_bdev.o 00:03:28.047 CC module/accel/error/accel_error.o 00:03:28.047 CC module/accel/iaa/accel_iaa.o 00:03:28.047 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.047 CC module/accel/ioat/accel_ioat.o 00:03:28.047 CC module/accel/dsa/accel_dsa.o 00:03:28.047 LIB 
libspdk_env_dpdk_rpc.a 00:03:28.306 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.306 LIB libspdk_scheduler_gscheduler.a 00:03:28.306 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.306 LIB libspdk_scheduler_dynamic.a 00:03:28.306 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.306 CC module/accel/error/accel_error_rpc.o 00:03:28.306 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.306 LIB libspdk_accel_dsa.a 00:03:28.306 LIB libspdk_blob_bdev.a 00:03:28.306 LIB libspdk_accel_ioat.a 00:03:28.306 LIB libspdk_accel_error.a 00:03:28.306 LIB libspdk_accel_iaa.a 00:03:28.565 CC module/bdev/delay/vbdev_delay.o 00:03:28.565 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.565 CC module/bdev/error/vbdev_error.o 00:03:28.565 CC module/bdev/gpt/gpt.o 00:03:28.565 CC module/bdev/malloc/bdev_malloc.o 00:03:28.565 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.565 CC module/bdev/null/bdev_null.o 00:03:28.565 CC module/bdev/nvme/bdev_nvme.o 00:03:28.565 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.565 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.565 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.823 CC module/bdev/null/bdev_null_rpc.o 00:03:28.823 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.823 LIB libspdk_sock_posix.a 00:03:28.823 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.823 LIB libspdk_blobfs_bdev.a 00:03:28.823 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.823 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.823 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.823 CC module/bdev/nvme/nvme_rpc.o 00:03:29.082 LIB libspdk_bdev_gpt.a 00:03:29.082 LIB libspdk_bdev_error.a 00:03:29.082 LIB libspdk_bdev_null.a 00:03:29.082 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.082 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.082 CC module/bdev/nvme/vbdev_opal.o 00:03:29.082 LIB libspdk_bdev_passthru.a 00:03:29.082 LIB libspdk_bdev_malloc.a 00:03:29.082 LIB libspdk_bdev_delay.a 00:03:29.082 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.082 CC module/bdev/raid/bdev_raid.o 00:03:29.082 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.082 CC module/bdev/split/vbdev_split.o 00:03:29.082 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.340 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.340 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.340 LIB libspdk_bdev_lvol.a 00:03:29.340 CC module/bdev/aio/bdev_aio.o 00:03:29.340 LIB libspdk_bdev_split.a 00:03:29.340 CC module/bdev/ftl/bdev_ftl.o 00:03:29.340 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.340 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.340 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.598 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.598 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.598 LIB libspdk_bdev_zone_block.a 00:03:29.598 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.598 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.598 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.598 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.856 LIB libspdk_bdev_aio.a 00:03:29.856 LIB libspdk_bdev_iscsi.a 00:03:29.856 CC module/bdev/raid/raid0.o 00:03:29.856 CC module/bdev/raid/raid1.o 00:03:29.856 CC module/bdev/raid/concat.o 00:03:29.856 CC module/bdev/raid/raid5f.o 00:03:29.856 LIB libspdk_bdev_ftl.a 00:03:30.115 LIB libspdk_bdev_virtio.a 00:03:30.374 LIB libspdk_bdev_raid.a 00:03:30.942 LIB libspdk_bdev_nvme.a 00:03:31.201 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.201 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.201 CC module/event/subsystems/vmd/vmd.o 00:03:31.201 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.201 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:03:31.201 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.201 CC module/event/subsystems/sock/sock.o 00:03:31.459 LIB libspdk_event_sock.a 00:03:31.459 LIB libspdk_event_vhost_blk.a 00:03:31.459 LIB libspdk_event_scheduler.a 00:03:31.459 LIB libspdk_event_vmd.a 00:03:31.459 LIB libspdk_event_iobuf.a 00:03:31.717 CC module/event/subsystems/accel/accel.o 00:03:31.717 LIB libspdk_event_accel.a 00:03:31.976 CC module/event/subsystems/bdev/bdev.o 00:03:32.235 LIB libspdk_event_bdev.a 00:03:32.235 CC module/event/subsystems/scsi/scsi.o 00:03:32.235 CC module/event/subsystems/nbd/nbd.o 00:03:32.235 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.235 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.494 LIB libspdk_event_nbd.a 00:03:32.494 LIB libspdk_event_scsi.a 00:03:32.494 LIB libspdk_event_nvmf.a 00:03:32.752 CC module/event/subsystems/iscsi/iscsi.o 00:03:32.752 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:32.752 LIB libspdk_event_vhost_scsi.a 00:03:32.752 LIB libspdk_event_iscsi.a 00:03:33.011 CC app/trace_record/trace_record.o 00:03:33.011 CXX app/trace/trace.o 00:03:33.011 CC examples/ioat/perf/perf.o 00:03:33.011 CC examples/accel/perf/accel_perf.o 00:03:33.011 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.011 CC app/nvmf_tgt/nvmf_main.o 00:03:33.011 CC examples/nvme/hello_world/hello_world.o 00:03:33.011 CC test/accel/dif/dif.o 00:03:33.011 CC examples/bdev/hello_world/hello_bdev.o 00:03:33.011 CC examples/blob/hello_world/hello_blob.o 00:03:33.270 LINK nvmf_tgt 00:03:33.270 LINK iscsi_tgt 00:03:33.270 LINK spdk_trace_record 00:03:33.270 LINK hello_world 00:03:33.270 LINK ioat_perf 00:03:33.270 LINK hello_bdev 00:03:33.528 LINK spdk_trace 00:03:33.528 LINK hello_blob 00:03:33.528 LINK dif 00:03:33.528 LINK accel_perf 00:03:34.094 CC examples/blob/cli/blobcli.o 00:03:34.094 CC examples/ioat/verify/verify.o 00:03:34.094 CC app/spdk_tgt/spdk_tgt.o 00:03:34.094 LINK spdk_tgt 00:03:34.094 LINK verify 00:03:34.353 LINK blobcli 00:03:34.611 CC examples/nvme/reconnect/reconnect.o 00:03:34.611 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.870 CC examples/nvme/arbitration/arbitration.o 00:03:34.870 LINK reconnect 00:03:35.129 LINK arbitration 00:03:35.129 LINK nvme_manage 00:03:35.696 CC examples/nvme/hotplug/hotplug.o 00:03:35.955 LINK hotplug 00:03:36.213 CC test/bdev/bdevio/bdevio.o 00:03:36.213 CC test/app/bdev_svc/bdev_svc.o 00:03:36.213 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:36.472 LINK bdev_svc 00:03:36.472 CC test/app/histogram_perf/histogram_perf.o 00:03:36.472 LINK bdevio 00:03:36.472 LINK histogram_perf 00:03:36.731 LINK nvme_fuzz 00:03:36.990 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.990 CC test/app/jsoncat/jsoncat.o 00:03:36.990 LINK jsoncat 00:03:37.249 CC test/app/stub/stub.o 00:03:37.249 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.249 CC examples/nvme/abort/abort.o 00:03:37.508 LINK stub 00:03:37.508 LINK cmb_copy 00:03:37.508 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.508 CC app/spdk_lspci/spdk_lspci.o 00:03:37.767 LINK pmr_persistence 00:03:37.767 LINK spdk_lspci 00:03:37.767 LINK abort 00:03:37.767 CC examples/sock/hello_world/hello_sock.o 00:03:37.767 LINK bdevperf 00:03:38.026 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.026 LINK hello_sock 00:03:38.592 CC examples/vmd/lsvmd/lsvmd.o 00:03:38.592 CC examples/vmd/led/led.o 00:03:38.592 LINK lsvmd 00:03:38.884 LINK led 00:03:38.884 CC examples/util/zipf/zipf.o 00:03:38.884 CC examples/nvmf/nvmf/nvmf.o 00:03:38.884 CC 
app/spdk_nvme_perf/perf.o 00:03:38.884 CC examples/thread/thread/thread_ex.o 00:03:39.147 LINK zipf 00:03:39.147 CC test/blobfs/mkfs/mkfs.o 00:03:39.147 LINK nvmf 00:03:39.147 LINK thread 00:03:39.405 LINK mkfs 00:03:39.405 TEST_HEADER include/spdk/accel.h 00:03:39.405 TEST_HEADER include/spdk/accel_module.h 00:03:39.405 TEST_HEADER include/spdk/assert.h 00:03:39.405 TEST_HEADER include/spdk/barrier.h 00:03:39.405 TEST_HEADER include/spdk/base64.h 00:03:39.405 TEST_HEADER include/spdk/bdev.h 00:03:39.405 TEST_HEADER include/spdk/bdev_module.h 00:03:39.405 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.405 TEST_HEADER include/spdk/bit_array.h 00:03:39.405 TEST_HEADER include/spdk/bit_pool.h 00:03:39.405 TEST_HEADER include/spdk/blob.h 00:03:39.405 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.405 TEST_HEADER include/spdk/blobfs.h 00:03:39.405 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.405 TEST_HEADER include/spdk/conf.h 00:03:39.405 TEST_HEADER include/spdk/config.h 00:03:39.405 TEST_HEADER include/spdk/cpuset.h 00:03:39.405 TEST_HEADER include/spdk/crc16.h 00:03:39.663 TEST_HEADER include/spdk/crc32.h 00:03:39.664 TEST_HEADER include/spdk/crc64.h 00:03:39.664 TEST_HEADER include/spdk/dif.h 00:03:39.664 TEST_HEADER include/spdk/dma.h 00:03:39.664 TEST_HEADER include/spdk/endian.h 00:03:39.664 TEST_HEADER include/spdk/env.h 00:03:39.664 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.664 TEST_HEADER include/spdk/event.h 00:03:39.664 TEST_HEADER include/spdk/fd.h 00:03:39.664 TEST_HEADER include/spdk/fd_group.h 00:03:39.664 TEST_HEADER include/spdk/file.h 00:03:39.664 TEST_HEADER include/spdk/ftl.h 00:03:39.664 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.664 TEST_HEADER include/spdk/hexlify.h 00:03:39.664 TEST_HEADER include/spdk/histogram_data.h 00:03:39.664 TEST_HEADER include/spdk/idxd.h 00:03:39.664 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.664 TEST_HEADER include/spdk/init.h 00:03:39.664 TEST_HEADER include/spdk/ioat.h 00:03:39.664 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.664 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.664 TEST_HEADER include/spdk/json.h 00:03:39.664 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.664 TEST_HEADER include/spdk/likely.h 00:03:39.664 TEST_HEADER include/spdk/log.h 00:03:39.664 TEST_HEADER include/spdk/lvol.h 00:03:39.664 TEST_HEADER include/spdk/memory.h 00:03:39.664 TEST_HEADER include/spdk/mmio.h 00:03:39.664 TEST_HEADER include/spdk/nbd.h 00:03:39.664 TEST_HEADER include/spdk/notify.h 00:03:39.664 TEST_HEADER include/spdk/nvme.h 00:03:39.664 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.664 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.664 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.664 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.664 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.664 TEST_HEADER include/spdk/nvmf.h 00:03:39.664 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.664 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.664 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.664 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.664 TEST_HEADER include/spdk/opal.h 00:03:39.664 TEST_HEADER include/spdk/opal_spec.h 00:03:39.664 TEST_HEADER include/spdk/pci_ids.h 00:03:39.664 TEST_HEADER include/spdk/pipe.h 00:03:39.664 TEST_HEADER include/spdk/queue.h 00:03:39.664 TEST_HEADER include/spdk/reduce.h 00:03:39.664 TEST_HEADER include/spdk/rpc.h 00:03:39.664 TEST_HEADER include/spdk/scheduler.h 00:03:39.664 TEST_HEADER include/spdk/scsi.h 00:03:39.664 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.664 TEST_HEADER include/spdk/sock.h 
00:03:39.664 TEST_HEADER include/spdk/stdinc.h 00:03:39.664 TEST_HEADER include/spdk/string.h 00:03:39.664 TEST_HEADER include/spdk/thread.h 00:03:39.664 TEST_HEADER include/spdk/trace.h 00:03:39.664 TEST_HEADER include/spdk/trace_parser.h 00:03:39.664 TEST_HEADER include/spdk/tree.h 00:03:39.664 TEST_HEADER include/spdk/ublk.h 00:03:39.664 TEST_HEADER include/spdk/util.h 00:03:39.664 TEST_HEADER include/spdk/uuid.h 00:03:39.664 TEST_HEADER include/spdk/version.h 00:03:39.664 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.664 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.664 TEST_HEADER include/spdk/vhost.h 00:03:39.664 TEST_HEADER include/spdk/vmd.h 00:03:39.664 TEST_HEADER include/spdk/xor.h 00:03:39.664 TEST_HEADER include/spdk/zipf.h 00:03:39.664 CC test/dma/test_dma/test_dma.o 00:03:39.664 CXX test/cpp_headers/accel.o 00:03:39.664 CXX test/cpp_headers/accel_module.o 00:03:39.922 CC examples/idxd/perf/perf.o 00:03:39.922 CXX test/cpp_headers/assert.o 00:03:39.922 LINK spdk_nvme_perf 00:03:40.181 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.181 LINK test_dma 00:03:40.181 LINK iscsi_fuzz 00:03:40.181 CXX test/cpp_headers/barrier.o 00:03:40.439 LINK mem_callbacks 00:03:40.439 LINK idxd_perf 00:03:40.439 CXX test/cpp_headers/base64.o 00:03:40.698 CC test/env/vtophys/vtophys.o 00:03:40.956 LINK vtophys 00:03:40.956 CXX test/cpp_headers/bdev.o 00:03:40.956 CC test/event/event_perf/event_perf.o 00:03:41.215 LINK event_perf 00:03:41.215 CXX test/cpp_headers/bdev_module.o 00:03:41.215 CC app/spdk_nvme_identify/identify.o 00:03:41.215 CXX test/cpp_headers/bdev_zone.o 00:03:41.215 CC app/spdk_nvme_discover/discovery_aer.o 00:03:41.474 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:41.474 CXX test/cpp_headers/bit_array.o 00:03:41.474 LINK spdk_nvme_discover 00:03:41.474 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:41.732 CXX test/cpp_headers/bit_pool.o 00:03:41.732 CXX test/cpp_headers/blob.o 00:03:41.732 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.732 CC test/event/reactor/reactor.o 00:03:41.732 CXX test/cpp_headers/blob_bdev.o 00:03:41.991 CC test/event/reactor_perf/reactor_perf.o 00:03:41.991 LINK env_dpdk_post_init 00:03:41.991 LINK reactor 00:03:41.991 LINK reactor_perf 00:03:41.991 LINK spdk_nvme_identify 00:03:41.991 LINK vhost_fuzz 00:03:41.991 CXX test/cpp_headers/blobfs.o 00:03:41.991 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:42.251 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.251 LINK interrupt_tgt 00:03:42.251 CXX test/cpp_headers/conf.o 00:03:42.510 CC app/spdk_top/spdk_top.o 00:03:42.510 CXX test/cpp_headers/config.o 00:03:42.510 CXX test/cpp_headers/cpuset.o 00:03:42.510 CC test/env/memory/memory_ut.o 00:03:42.769 CXX test/cpp_headers/crc16.o 00:03:42.769 CC test/event/app_repeat/app_repeat.o 00:03:42.769 CXX test/cpp_headers/crc32.o 00:03:42.769 CXX test/cpp_headers/crc64.o 00:03:42.769 CXX test/cpp_headers/dif.o 00:03:43.029 LINK app_repeat 00:03:43.029 CXX test/cpp_headers/dma.o 00:03:43.029 CXX test/cpp_headers/endian.o 00:03:43.029 CC test/env/pci/pci_ut.o 00:03:43.029 LINK memory_ut 00:03:43.029 CC app/vhost/vhost.o 00:03:43.029 CXX test/cpp_headers/env.o 00:03:43.029 CC app/spdk_dd/spdk_dd.o 00:03:43.288 CC app/fio/nvme/fio_plugin.o 00:03:43.288 CC app/fio/bdev/fio_plugin.o 00:03:43.288 CXX test/cpp_headers/env_dpdk.o 00:03:43.288 LINK vhost 00:03:43.288 CC test/lvol/esnap/esnap.o 00:03:43.288 LINK spdk_top 00:03:43.547 CXX test/cpp_headers/event.o 00:03:43.547 LINK pci_ut 00:03:43.547 LINK spdk_dd 00:03:43.547 CXX 
test/cpp_headers/fd.o 00:03:43.547 CXX test/cpp_headers/fd_group.o 00:03:43.807 LINK spdk_bdev 00:03:43.807 CXX test/cpp_headers/file.o 00:03:43.807 LINK spdk_nvme 00:03:43.807 CC test/nvme/aer/aer.o 00:03:43.807 CC test/nvme/reset/reset.o 00:03:43.807 CXX test/cpp_headers/ftl.o 00:03:43.807 CXX test/cpp_headers/gpt_spec.o 00:03:44.067 CC test/event/scheduler/scheduler.o 00:03:44.067 CXX test/cpp_headers/hexlify.o 00:03:44.067 CC test/nvme/sgl/sgl.o 00:03:44.067 LINK reset 00:03:44.067 LINK aer 00:03:44.327 CXX test/cpp_headers/histogram_data.o 00:03:44.327 LINK scheduler 00:03:44.327 CXX test/cpp_headers/idxd.o 00:03:44.327 LINK sgl 00:03:44.586 CXX test/cpp_headers/idxd_spec.o 00:03:44.586 CXX test/cpp_headers/init.o 00:03:44.845 CXX test/cpp_headers/ioat.o 00:03:44.845 CC test/rpc_client/rpc_client_test.o 00:03:45.105 CXX test/cpp_headers/ioat_spec.o 00:03:45.105 CXX test/cpp_headers/iscsi_spec.o 00:03:45.105 LINK rpc_client_test 00:03:45.105 CXX test/cpp_headers/json.o 00:03:45.105 CC test/thread/poller_perf/poller_perf.o 00:03:45.105 CC test/nvme/e2edp/nvme_dp.o 00:03:45.363 CXX test/cpp_headers/jsonrpc.o 00:03:45.363 LINK poller_perf 00:03:45.363 CXX test/cpp_headers/likely.o 00:03:45.622 CXX test/cpp_headers/log.o 00:03:45.622 CXX test/cpp_headers/lvol.o 00:03:45.622 LINK nvme_dp 00:03:45.622 CC test/nvme/overhead/overhead.o 00:03:45.622 CXX test/cpp_headers/memory.o 00:03:45.622 CC test/nvme/err_injection/err_injection.o 00:03:45.881 CXX test/cpp_headers/mmio.o 00:03:45.881 CC test/thread/lock/spdk_lock.o 00:03:45.881 LINK overhead 00:03:45.881 LINK err_injection 00:03:46.141 CXX test/cpp_headers/nbd.o 00:03:46.141 CXX test/cpp_headers/notify.o 00:03:46.141 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:46.141 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:46.141 CXX test/cpp_headers/nvme.o 00:03:46.400 LINK histogram_ut 00:03:46.400 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:46.400 CXX test/cpp_headers/nvme_intel.o 00:03:46.400 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:46.400 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:46.400 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.659 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.918 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:46.918 CXX test/cpp_headers/nvme_spec.o 00:03:46.918 LINK blob_bdev_ut 00:03:46.918 CC test/nvme/startup/startup.o 00:03:47.178 LINK scsi_nvme_ut 00:03:47.178 CXX test/cpp_headers/nvme_zns.o 00:03:47.178 LINK startup 00:03:47.178 CC test/nvme/reserve/reserve.o 00:03:47.178 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:47.437 CXX test/cpp_headers/nvmf.o 00:03:47.437 LINK reserve 00:03:47.437 CXX test/cpp_headers/nvmf_cmd.o 00:03:47.437 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:47.697 LINK spdk_lock 00:03:47.697 CXX test/cpp_headers/nvmf_spec.o 00:03:47.697 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:47.957 CXX test/cpp_headers/nvmf_transport.o 00:03:47.957 LINK tree_ut 00:03:47.957 CXX test/cpp_headers/opal.o 00:03:48.218 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:48.218 CXX test/cpp_headers/opal_spec.o 00:03:48.218 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:48.218 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:48.478 CC test/nvme/simple_copy/simple_copy.o 00:03:48.478 CXX test/cpp_headers/pci_ids.o 00:03:48.478 LINK accel_ut 00:03:48.478 CXX test/cpp_headers/pipe.o 00:03:48.478 LINK simple_copy 00:03:48.478 LINK esnap 00:03:48.739 CXX test/cpp_headers/queue.o 00:03:48.739 LINK gpt_ut 00:03:48.739 CXX test/cpp_headers/reduce.o 00:03:48.739 
CXX test/cpp_headers/rpc.o 00:03:48.739 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:48.739 CXX test/cpp_headers/scheduler.o 00:03:49.134 CXX test/cpp_headers/scsi.o 00:03:49.134 CXX test/cpp_headers/scsi_spec.o 00:03:49.134 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:49.134 LINK dma_ut 00:03:49.134 CXX test/cpp_headers/sock.o 00:03:49.394 LINK vbdev_lvol_ut 00:03:49.394 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:49.394 LINK blobfs_async_ut 00:03:49.394 CXX test/cpp_headers/stdinc.o 00:03:49.394 CC test/unit/lib/event/app.c/app_ut.o 00:03:49.653 CXX test/cpp_headers/string.o 00:03:49.653 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:49.653 CC test/nvme/connect_stress/connect_stress.o 00:03:49.653 LINK part_ut 00:03:49.653 CXX test/cpp_headers/thread.o 00:03:49.913 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:49.913 LINK connect_stress 00:03:49.913 CXX test/cpp_headers/trace.o 00:03:49.913 CXX test/cpp_headers/trace_parser.o 00:03:50.172 LINK app_ut 00:03:50.172 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:50.172 CXX test/cpp_headers/tree.o 00:03:50.172 CXX test/cpp_headers/ublk.o 00:03:50.431 CXX test/cpp_headers/util.o 00:03:50.431 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:50.431 LINK bdev_zone_ut 00:03:50.431 CXX test/cpp_headers/uuid.o 00:03:50.431 LINK reactor_ut 00:03:50.690 CXX test/cpp_headers/version.o 00:03:50.690 CXX test/cpp_headers/vfio_user_pci.o 00:03:50.690 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:50.949 LINK bdev_raid_sb_ut 00:03:50.949 CXX test/cpp_headers/vfio_user_spec.o 00:03:50.949 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:50.949 CC test/nvme/boot_partition/boot_partition.o 00:03:50.949 CXX test/cpp_headers/vhost.o 00:03:50.949 LINK blobfs_sync_ut 00:03:51.208 LINK boot_partition 00:03:51.208 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:51.208 LINK concat_ut 00:03:51.208 CXX test/cpp_headers/vmd.o 00:03:51.208 LINK bdev_raid_ut 00:03:51.467 LINK ioat_ut 00:03:51.467 CXX test/cpp_headers/xor.o 00:03:51.467 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:51.467 CXX test/cpp_headers/zipf.o 00:03:51.467 CC test/nvme/compliance/nvme_compliance.o 00:03:51.467 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:51.726 CC test/nvme/fused_ordering/fused_ordering.o 00:03:51.726 LINK bdev_ut 00:03:51.726 LINK blobfs_bdev_ut 00:03:51.726 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:51.726 LINK fused_ordering 00:03:51.985 CC test/nvme/fdp/fdp.o 00:03:51.985 LINK nvme_compliance 00:03:51.985 LINK doorbell_aers 00:03:51.985 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:52.244 LINK raid1_ut 00:03:52.244 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:52.502 LINK conn_ut 00:03:52.502 LINK fdp 00:03:52.502 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:52.502 LINK init_grp_ut 00:03:52.761 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:52.761 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:53.019 CC test/nvme/cuse/cuse.o 00:03:53.019 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:53.019 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:53.019 LINK vbdev_zone_block_ut 00:03:53.277 LINK bdev_ut 00:03:53.277 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:53.535 LINK jsonrpc_server_ut 00:03:53.535 LINK raid5f_ut 00:03:53.535 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:53.794 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:53.794 CC test/unit/lib/log/log.c/log_ut.o 00:03:53.794 CC 
test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:53.794 LINK cuse 00:03:54.053 LINK json_util_ut 00:03:54.053 LINK log_ut 00:03:54.053 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:54.311 LINK blob_ut 00:03:54.311 LINK param_ut 00:03:54.311 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:54.311 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:54.311 LINK json_write_ut 00:03:54.570 LINK portal_grp_ut 00:03:54.570 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:54.570 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:54.570 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:54.829 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:54.829 LINK notify_ut 00:03:54.829 LINK tgt_node_ut 00:03:55.088 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:55.088 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:55.347 LINK iscsi_ut 00:03:55.606 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:55.606 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:55.606 LINK json_parse_ut 00:03:55.881 LINK nvme_ut 00:03:55.881 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:55.881 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:55.881 LINK nvme_ctrlr_cmd_ut 00:03:56.151 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:56.151 LINK dev_ut 00:03:56.151 LINK lvol_ut 00:03:56.152 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:56.410 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:56.669 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:56.669 LINK nvme_ns_ut 00:03:56.928 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:57.187 LINK lun_ut 00:03:57.187 LINK bdev_nvme_ut 00:03:57.447 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:57.705 LINK scsi_ut 00:03:57.705 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:57.705 LINK nvme_ctrlr_ut 00:03:57.705 LINK ctrlr_discovery_ut 00:03:57.705 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:57.964 LINK posix_ut 00:03:57.964 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:57.964 LINK subsystem_ut 00:03:57.964 LINK nvme_ns_cmd_ut 00:03:58.223 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:58.223 LINK sock_ut 00:03:58.223 LINK ctrlr_ut 00:03:58.223 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:58.223 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:58.482 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:58.482 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:58.482 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:58.742 LINK ctrlr_bdev_ut 00:03:58.742 LINK scsi_bdev_ut 00:03:59.001 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:59.001 LINK scsi_pr_ut 00:03:59.001 LINK tcp_ut 00:03:59.001 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:59.259 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:59.259 LINK nvmf_ut 00:03:59.518 LINK nvme_ns_ocssd_cmd_ut 00:03:59.518 LINK nvme_poll_group_ut 00:03:59.518 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:59.518 LINK nvme_quirks_ut 00:03:59.777 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:59.777 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:59.777 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:59.777 LINK base64_ut 00:03:59.777 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:59.777 LINK nvme_pcie_ut 00:04:00.036 LINK cpuset_ut 00:04:00.036 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:00.036 LINK nvme_qpair_ut 00:04:00.036 LINK bit_array_ut 00:04:00.036 LINK crc16_ut 00:04:00.036 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:00.295 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 
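Each 'CC'/'LINK' pair in this stretch is make compiling one unit-test source into an object and linking it into a standalone test binary. Given the configure flags recorded earlier (--enable-debug, --enable-asan, --enable-ubsan, --enable-coverage), the underlying commands are roughly of this shape; the flag set, the -L path, and the -lspdk_util choice below are inferences from those options and the libspdk_util.a build above, not copied from the log:

    # Compile one test object with debug info, ASan/UBSan and gcov instrumentation.
    cc -g -O0 -fsanitize=address,undefined --coverage \
       -c test/unit/lib/util/crc64.c/crc64_ut.c -o crc64_ut.o
    # Link it into its own executable against the library under test.
    cc crc64_ut.o -L build/lib -lspdk_util -fsanitize=address,undefined --coverage -o crc64_ut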
00:04:00.295 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:00.295 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:00.295 LINK crc32_ieee_ut 00:04:00.295 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:00.295 LINK crc32c_ut 00:04:00.295 LINK crc64_ut 00:04:00.295 LINK iobuf_ut 00:04:00.554 CC test/unit/lib/util/math.c/math_ut.o 00:04:00.554 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:00.554 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:00.554 LINK iov_ut 00:04:00.554 LINK math_ut 00:04:00.812 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:00.812 CC test/unit/lib/util/string.c/string_ut.o 00:04:00.812 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:01.071 LINK pipe_ut 00:04:01.071 LINK string_ut 00:04:01.330 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:01.330 LINK transport_ut 00:04:01.330 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:01.330 LINK nvme_transport_ut 00:04:01.330 LINK nvme_io_msg_ut 00:04:01.330 LINK dif_ut 00:04:01.589 LINK xor_ut 00:04:01.589 LINK thread_ut 00:04:01.589 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:01.589 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:01.589 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:01.848 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:01.848 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:01.848 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:01.848 LINK rdma_ut 00:04:02.107 LINK pci_event_ut 00:04:02.107 LINK subsystem_ut 00:04:02.107 LINK nvme_fabric_ut 00:04:02.107 LINK nvme_pcie_common_ut 00:04:02.107 LINK rpc_ut 00:04:02.367 LINK nvme_tcp_ut 00:04:02.367 LINK idxd_user_ut 00:04:02.367 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:02.367 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:02.367 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:02.367 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:02.367 LINK nvme_opal_ut 00:04:02.367 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:02.626 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:02.626 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:02.626 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:02.626 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:02.626 LINK idxd_ut 00:04:02.626 LINK ftl_bitmap_ut 00:04:02.885 LINK ftl_l2p_ut 00:04:02.885 LINK common_ut 00:04:02.885 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:02.885 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:02.885 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:02.885 LINK ftl_mempool_ut 00:04:03.144 LINK ftl_io_ut 00:04:03.403 LINK ftl_mngt_ut 00:04:03.662 LINK ftl_band_ut 00:04:03.921 LINK nvme_cuse_ut 00:04:04.179 LINK vhost_ut 00:04:04.179 LINK ftl_layout_upgrade_ut 00:04:04.179 LINK ftl_sb_ut 00:04:04.179 LINK nvme_rdma_ut 00:04:04.437 00:04:04.437 real 1m34.282s 00:04:04.437 user 8m5.273s 00:04:04.437 sys 1m37.038s 00:04:04.437 14:05:56 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:04.437 14:05:56 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.437 ************************************ 00:04:04.437 END TEST unittest_build 00:04:04.437 ************************************ 00:04:04.697 14:05:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:04.697 14:05:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:04.697 14:05:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:04.697 14:05:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:04.697 14:05:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:04.697 14:05:56 
-- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:04.697 14:05:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:04.697 14:05:56 -- scripts/common.sh@335 -- # IFS=.-: 00:04:04.697 14:05:56 -- scripts/common.sh@335 -- # read -ra ver1 00:04:04.697 14:05:56 -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.697 14:05:56 -- scripts/common.sh@336 -- # read -ra ver2 00:04:04.697 14:05:56 -- scripts/common.sh@337 -- # local 'op=<' 00:04:04.697 14:05:56 -- scripts/common.sh@339 -- # ver1_l=2 00:04:04.697 14:05:56 -- scripts/common.sh@340 -- # ver2_l=1 00:04:04.697 14:05:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:04.697 14:05:56 -- scripts/common.sh@343 -- # case "$op" in 00:04:04.697 14:05:56 -- scripts/common.sh@344 -- # : 1 00:04:04.697 14:05:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:04.697 14:05:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.697 14:05:56 -- scripts/common.sh@364 -- # decimal 1 00:04:04.697 14:05:56 -- scripts/common.sh@352 -- # local d=1 00:04:04.697 14:05:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.697 14:05:56 -- scripts/common.sh@354 -- # echo 1 00:04:04.697 14:05:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:04.697 14:05:56 -- scripts/common.sh@365 -- # decimal 2 00:04:04.697 14:05:56 -- scripts/common.sh@352 -- # local d=2 00:04:04.697 14:05:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.697 14:05:56 -- scripts/common.sh@354 -- # echo 2 00:04:04.697 14:05:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:04.697 14:05:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:04.697 14:05:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:04.697 14:05:56 -- scripts/common.sh@367 -- # return 0 00:04:04.697 14:05:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.697 14:05:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.697 --rc genhtml_branch_coverage=1 00:04:04.697 --rc genhtml_function_coverage=1 00:04:04.697 --rc genhtml_legend=1 00:04:04.697 --rc geninfo_all_blocks=1 00:04:04.697 --rc geninfo_unexecuted_blocks=1 00:04:04.697 00:04:04.697 ' 00:04:04.697 14:05:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.697 --rc genhtml_branch_coverage=1 00:04:04.697 --rc genhtml_function_coverage=1 00:04:04.697 --rc genhtml_legend=1 00:04:04.697 --rc geninfo_all_blocks=1 00:04:04.697 --rc geninfo_unexecuted_blocks=1 00:04:04.697 00:04:04.697 ' 00:04:04.697 14:05:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.697 --rc genhtml_branch_coverage=1 00:04:04.697 --rc genhtml_function_coverage=1 00:04:04.697 --rc genhtml_legend=1 00:04:04.697 --rc geninfo_all_blocks=1 00:04:04.697 --rc geninfo_unexecuted_blocks=1 00:04:04.697 00:04:04.697 ' 00:04:04.697 14:05:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:04.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.697 --rc genhtml_branch_coverage=1 00:04:04.697 --rc genhtml_function_coverage=1 00:04:04.697 --rc genhtml_legend=1 00:04:04.697 --rc geninfo_all_blocks=1 00:04:04.697 --rc geninfo_unexecuted_blocks=1 00:04:04.697 00:04:04.697 ' 00:04:04.697 14:05:56 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:04.697 
14:05:56 -- nvmf/common.sh@7 -- # uname -s 00:04:04.697 14:05:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.697 14:05:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.697 14:05:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.697 14:05:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.697 14:05:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.697 14:05:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.697 14:05:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.697 14:05:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.697 14:05:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.697 14:05:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.697 14:05:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:53fe008f-a2ac-405e-9902-ab5bc898d13c 00:04:04.697 14:05:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=53fe008f-a2ac-405e-9902-ab5bc898d13c 00:04:04.697 14:05:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.697 14:05:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.697 14:05:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.697 14:05:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:04.697 14:05:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.697 14:05:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.697 14:05:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.697 14:05:56 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:04.697 14:05:56 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:04.697 14:05:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:04.697 14:05:56 -- paths/export.sh@5 -- # export PATH 00:04:04.697 14:05:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:04.697 14:05:56 -- nvmf/common.sh@46 -- # : 0 00:04:04.697 14:05:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:04.697 14:05:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:04.697 14:05:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:04.697 14:05:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.697 14:05:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.697 14:05:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:04.697 14:05:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:04.697 14:05:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:04.697 14:05:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 
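The NVME_HOSTNQN value sourced above comes from nvme-cli's gen-hostnqn, which wraps a freshly generated UUID in the standard NVMe-oF NQN prefix; it can be reproduced locally with a one-liner (the UUID differs per invocation, the one shown is from this log):

    nvme gen-hostnqn
    # prints e.g. nqn.2014-08.org.nvmexpress:uuid:53fe008f-a2ac-405e-9902-ab5bc898d13c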
00:04:04.697 14:05:56 -- spdk/autotest.sh@32 -- # uname -s 00:04:04.697 14:05:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:04.697 14:05:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:04.697 14:05:56 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.697 14:05:56 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:04.697 14:05:56 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:04.697 14:05:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:04.697 14:05:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:04.697 14:05:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:04.697 14:05:56 -- spdk/autotest.sh@48 -- # udevadm_pid=103963 00:04:04.697 14:05:56 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:04.697 14:05:56 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:04.697 14:05:56 -- spdk/autotest.sh@54 -- # echo 103987 00:04:04.697 14:05:56 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:04.697 14:05:56 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:04.697 14:05:56 -- spdk/autotest.sh@56 -- # echo 103991 00:04:04.697 14:05:56 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:04.697 14:05:56 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:04.697 14:05:56 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:04.698 14:05:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.698 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:04:04.698 14:05:56 -- spdk/autotest.sh@70 -- # create_test_list 00:04:04.698 14:05:56 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:04.698 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:04:04.698 14:05:56 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:04.698 14:05:56 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:04.698 14:05:56 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:04.698 14:05:56 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:04.698 14:05:56 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:04.698 14:05:56 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:04.698 14:05:56 -- common/autotest_common.sh@1450 -- # uname 00:04:04.698 14:05:56 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:04.698 14:05:56 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:04.698 14:05:56 -- common/autotest_common.sh@1470 -- # uname 00:04:04.698 14:05:56 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:04.698 14:05:56 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:04.956 14:05:56 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:04.956 lcov: LCOV version 1.15 00:04:04.956 14:05:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 
--rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:04:23.039 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:04:23.039 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:04:23.039 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:04:23.039 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:04:23.039 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:04:23.039 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:04:49.588 14:06:37 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:04:49.588 14:06:37 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:49.588 14:06:37 -- common/autotest_common.sh@10 -- # set +x
00:04:49.588 14:06:37 -- spdk/autotest.sh@89 -- # rm -f
00:04:49.588 14:06:37 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:49.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:49.588 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:04:49.588 14:06:38 -- spdk/autotest.sh@94 -- # get_zoned_devs
00:04:49.588 14:06:38 -- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:49.588 14:06:38 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:49.588 14:06:38 -- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:49.588 14:06:38 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:49.588 14:06:38 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:49.588 14:06:38 -- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:49.588 14:06:38 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:49.588 14:06:38 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:04:49.588 14:06:38 -- spdk/autotest.sh@96 -- # (( 0 > 0 ))
00:04:49.588 14:06:38 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1
00:04:49.588 14:06:38 -- spdk/autotest.sh@108 -- # grep -v p
00:04:49.588 14:06:38 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:49.588 14:06:38 -- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:04:49.588 14:06:38 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:04:49.588 14:06:38 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:49.588 14:06:38 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:49.588 No valid GPT data, bailing
00:04:49.588 14:06:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:49.588 14:06:38 -- scripts/common.sh@393 -- # pt=
00:04:49.588 14:06:38 -- scripts/common.sh@394 -- # return 1
00:04:49.588 14:06:38 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:49.588 1+0 records in
00:04:49.588 1+0 records out
00:04:49.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536711 s, 195 MB/s
00:04:49.588 14:06:38 -- spdk/autotest.sh@116 -- # sync
00:04:49.588 14:06:38 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:49.588 14:06:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:49.588
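In outline, the pre_cleanup pass above scrubs each NVMe namespace that is neither zoned nor carrying a partition-table signature, then syncs. A minimal sketch of that probe, under one stated simplification: the log's check also runs scripts/spdk-gpt.py, which is approximated here by the blkid test alone.

# Outline of the disk scrub traced above (sketch, not the exact scripts).
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block") || return 1   # no table -> free, as in "pt=" above
    [[ -n $pt ]]
}

for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    is_block_zoned "${dev##*/}" && continue   # never wipe zoned namespaces
    block_in_use "$dev" && continue           # leave partitioned disks alone
    dd if=/dev/zero of="$dev" bs=1M count=1   # clear stale metadata (the 1 MiB write logged above)
done
sync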
14:06:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:49.588 14:06:39 -- spdk/autotest.sh@122 -- # uname -s 00:04:49.588 14:06:39 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:49.588 14:06:39 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:49.588 14:06:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.588 14:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.588 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:04:49.588 ************************************ 00:04:49.588 START TEST setup.sh 00:04:49.588 ************************************ 00:04:49.588 14:06:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:49.588 * Looking for test storage... 00:04:49.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.588 14:06:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.588 14:06:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.588 14:06:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.588 14:06:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.588 14:06:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.588 14:06:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.588 14:06:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.588 14:06:39 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.588 14:06:39 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.588 14:06:39 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.588 14:06:39 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.588 14:06:39 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.588 14:06:39 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.588 14:06:39 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.588 14:06:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.588 14:06:39 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.588 14:06:39 -- scripts/common.sh@344 -- # : 1 00:04:49.588 14:06:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.588 14:06:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.588 14:06:39 -- scripts/common.sh@364 -- # decimal 1 00:04:49.588 14:06:39 -- scripts/common.sh@352 -- # local d=1 00:04:49.588 14:06:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.588 14:06:39 -- scripts/common.sh@354 -- # echo 1 00:04:49.588 14:06:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.588 14:06:39 -- scripts/common.sh@365 -- # decimal 2 00:04:49.588 14:06:39 -- scripts/common.sh@352 -- # local d=2 00:04:49.588 14:06:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.588 14:06:39 -- scripts/common.sh@354 -- # echo 2 00:04:49.588 14:06:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.588 14:06:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.588 14:06:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.588 14:06:39 -- scripts/common.sh@367 -- # return 0 00:04:49.589 14:06:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.589 14:06:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:39 -- setup/test-setup.sh@10 -- # uname -s 00:04:49.589 14:06:39 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:49.589 14:06:39 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:49.589 14:06:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.589 14:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.589 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:04:49.589 ************************************ 00:04:49.589 START TEST acl 00:04:49.589 ************************************ 00:04:49.589 14:06:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:49.589 * Looking for test storage... 
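The START TEST / END TEST banners framing each suite in this log come from the run_test wrapper invoked above (run_test setup.sh, run_test acl). A minimal sketch consistent with the banners, the argument-count guard, and the `time` output seen here; treat it as illustrative, not the verbatim helper.

# run_test-style wrapper: banner, timed execution, banner.
run_test() {
    local name=$1; shift
    (( $# >= 1 )) || return 1          # mirrors the '[' 2 -le 1 ']' guard above
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                          # produces the real/user/sys lines below
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

# e.g. run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh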
00:04:49.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:49.589 14:06:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.589 14:06:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.589 14:06:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.589 14:06:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.589 14:06:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.589 14:06:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.589 14:06:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.589 14:06:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.589 14:06:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.589 14:06:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.589 14:06:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.589 14:06:40 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.589 14:06:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.589 14:06:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.589 14:06:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.589 14:06:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.589 14:06:40 -- scripts/common.sh@344 -- # : 1 00:04:49.589 14:06:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.589 14:06:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.589 14:06:40 -- scripts/common.sh@364 -- # decimal 1 00:04:49.589 14:06:40 -- scripts/common.sh@352 -- # local d=1 00:04:49.589 14:06:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.589 14:06:40 -- scripts/common.sh@354 -- # echo 1 00:04:49.589 14:06:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.589 14:06:40 -- scripts/common.sh@365 -- # decimal 2 00:04:49.589 14:06:40 -- scripts/common.sh@352 -- # local d=2 00:04:49.589 14:06:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.589 14:06:40 -- scripts/common.sh@354 -- # echo 2 00:04:49.589 14:06:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.589 14:06:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.589 14:06:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.589 14:06:40 -- scripts/common.sh@367 -- # return 0 00:04:49.589 14:06:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.589 14:06:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:40 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.589 --rc genhtml_branch_coverage=1 00:04:49.589 --rc genhtml_function_coverage=1 00:04:49.589 --rc genhtml_legend=1 00:04:49.589 --rc geninfo_all_blocks=1 00:04:49.589 --rc geninfo_unexecuted_blocks=1 00:04:49.589 00:04:49.589 ' 00:04:49.589 14:06:40 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:49.589 14:06:40 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:49.589 14:06:40 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:49.589 14:06:40 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:49.589 14:06:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:49.589 14:06:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:49.589 14:06:40 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:49.589 14:06:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.589 14:06:40 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:49.589 14:06:40 -- setup/acl.sh@12 -- # devs=() 00:04:49.589 14:06:40 -- setup/acl.sh@12 -- # declare -a devs 00:04:49.589 14:06:40 -- setup/acl.sh@13 -- # drivers=() 00:04:49.589 14:06:40 -- setup/acl.sh@13 -- # declare -A drivers 00:04:49.589 14:06:40 -- setup/acl.sh@51 -- # setup reset 00:04:49.589 14:06:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.589 14:06:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.589 14:06:40 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:49.589 14:06:40 -- setup/acl.sh@16 -- # local dev driver 00:04:49.589 14:06:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.589 14:06:40 -- setup/acl.sh@15 -- # setup output status 00:04:49.589 14:06:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.589 14:06:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:49.589 Hugepages 00:04:49.589 node hugesize free / total 00:04:49.589 14:06:40 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:49.589 14:06:40 -- setup/acl.sh@19 -- # continue 00:04:49.589 14:06:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.589 00:04:49.589 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.589 14:06:40 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:49.589 14:06:40 -- setup/acl.sh@19 -- # continue 00:04:49.589 14:06:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.589 14:06:40 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:49.589 14:06:40 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:49.589 14:06:40 -- setup/acl.sh@20 -- # continue 00:04:49.589 14:06:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.589 14:06:40 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:49.589 14:06:40 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:49.589 14:06:40 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:49.589 14:06:40 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:49.589 14:06:40 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:49.589 14:06:40 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.589 14:06:40 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:49.589 14:06:40 -- setup/acl.sh@54 -- # run_test denied denied 00:04:49.589 14:06:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.589 14:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.589 14:06:40 -- 
common/autotest_common.sh@10 -- # set +x 00:04:49.589 ************************************ 00:04:49.589 START TEST denied 00:04:49.589 ************************************ 00:04:49.589 14:06:40 -- common/autotest_common.sh@1114 -- # denied 00:04:49.589 14:06:40 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:49.589 14:06:40 -- setup/acl.sh@38 -- # setup output config 00:04:49.589 14:06:40 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:49.589 14:06:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.589 14:06:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.967 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:50.968 14:06:42 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:50.968 14:06:42 -- setup/acl.sh@28 -- # local dev driver 00:04:50.968 14:06:42 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:50.968 14:06:42 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:50.968 14:06:42 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:50.968 14:06:42 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:50.968 14:06:42 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:50.968 14:06:42 -- setup/acl.sh@41 -- # setup reset 00:04:50.968 14:06:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.968 14:06:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.226 00:04:51.226 real 0m2.339s 00:04:51.226 user 0m0.492s 00:04:51.226 sys 0m1.898s 00:04:51.226 14:06:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.226 14:06:43 -- common/autotest_common.sh@10 -- # set +x 00:04:51.226 ************************************ 00:04:51.226 END TEST denied 00:04:51.226 ************************************ 00:04:51.486 14:06:43 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:51.486 14:06:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.486 14:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.486 14:06:43 -- common/autotest_common.sh@10 -- # set +x 00:04:51.486 ************************************ 00:04:51.486 START TEST allowed 00:04:51.486 ************************************ 00:04:51.486 14:06:43 -- common/autotest_common.sh@1114 -- # allowed 00:04:51.486 14:06:43 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:51.486 14:06:43 -- setup/acl.sh@45 -- # setup output config 00:04:51.486 14:06:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.486 14:06:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:51.486 14:06:43 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:53.390 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.390 14:06:45 -- setup/acl.sh@47 -- # verify 00:04:53.390 14:06:45 -- setup/acl.sh@28 -- # local dev driver 00:04:53.390 14:06:45 -- setup/acl.sh@48 -- # setup reset 00:04:53.390 14:06:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.390 14:06:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.960 00:04:53.960 real 0m2.405s 00:04:53.960 user 0m0.443s 00:04:53.960 sys 0m1.962s 00:04:53.960 14:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.960 14:06:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.960 ************************************ 00:04:53.960 END TEST allowed 00:04:53.960 ************************************ 00:04:53.960 00:04:53.960 real 0m5.849s 
00:04:53.960 user 0m1.556s 00:04:53.960 sys 0m4.400s 00:04:53.960 14:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.960 14:06:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.960 ************************************ 00:04:53.960 END TEST acl 00:04:53.960 ************************************ 00:04:53.960 14:06:45 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:53.960 14:06:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.960 14:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.960 14:06:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.960 ************************************ 00:04:53.960 START TEST hugepages 00:04:53.960 ************************************ 00:04:53.960 14:06:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:53.960 * Looking for test storage... 00:04:53.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:53.960 14:06:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:53.960 14:06:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:53.960 14:06:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:53.960 14:06:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:53.960 14:06:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:53.960 14:06:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:53.960 14:06:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:53.960 14:06:46 -- scripts/common.sh@335 -- # IFS=.-: 00:04:53.960 14:06:46 -- scripts/common.sh@335 -- # read -ra ver1 00:04:53.960 14:06:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.960 14:06:46 -- scripts/common.sh@336 -- # read -ra ver2 00:04:53.960 14:06:46 -- scripts/common.sh@337 -- # local 'op=<' 00:04:53.960 14:06:46 -- scripts/common.sh@339 -- # ver1_l=2 00:04:53.960 14:06:46 -- scripts/common.sh@340 -- # ver2_l=1 00:04:53.960 14:06:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:53.960 14:06:46 -- scripts/common.sh@343 -- # case "$op" in 00:04:53.960 14:06:46 -- scripts/common.sh@344 -- # : 1 00:04:53.960 14:06:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:53.960 14:06:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.960 14:06:46 -- scripts/common.sh@364 -- # decimal 1 00:04:53.960 14:06:46 -- scripts/common.sh@352 -- # local d=1 00:04:53.960 14:06:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.960 14:06:46 -- scripts/common.sh@354 -- # echo 1 00:04:53.960 14:06:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:53.960 14:06:46 -- scripts/common.sh@365 -- # decimal 2 00:04:53.960 14:06:46 -- scripts/common.sh@352 -- # local d=2 00:04:53.960 14:06:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.960 14:06:46 -- scripts/common.sh@354 -- # echo 2 00:04:53.960 14:06:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:53.960 14:06:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:53.960 14:06:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:53.960 14:06:46 -- scripts/common.sh@367 -- # return 0 00:04:53.960 14:06:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.960 14:06:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:53.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.960 --rc genhtml_branch_coverage=1 00:04:53.960 --rc genhtml_function_coverage=1 00:04:53.960 --rc genhtml_legend=1 00:04:53.960 --rc geninfo_all_blocks=1 00:04:53.960 --rc geninfo_unexecuted_blocks=1 00:04:53.960 00:04:53.960 ' 00:04:53.960 14:06:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:53.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.960 --rc genhtml_branch_coverage=1 00:04:53.960 --rc genhtml_function_coverage=1 00:04:53.960 --rc genhtml_legend=1 00:04:53.960 --rc geninfo_all_blocks=1 00:04:53.960 --rc geninfo_unexecuted_blocks=1 00:04:53.960 00:04:53.960 ' 00:04:53.960 14:06:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:53.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.960 --rc genhtml_branch_coverage=1 00:04:53.960 --rc genhtml_function_coverage=1 00:04:53.960 --rc genhtml_legend=1 00:04:53.960 --rc geninfo_all_blocks=1 00:04:53.960 --rc geninfo_unexecuted_blocks=1 00:04:53.960 00:04:53.960 ' 00:04:53.960 14:06:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:53.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.960 --rc genhtml_branch_coverage=1 00:04:53.960 --rc genhtml_function_coverage=1 00:04:53.960 --rc genhtml_legend=1 00:04:53.960 --rc geninfo_all_blocks=1 00:04:53.960 --rc geninfo_unexecuted_blocks=1 00:04:53.960 00:04:53.960 ' 00:04:53.960 14:06:46 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:53.960 14:06:46 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:53.960 14:06:46 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:53.961 14:06:46 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:53.961 14:06:46 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:53.961 14:06:46 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:53.961 14:06:46 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:53.961 14:06:46 -- setup/common.sh@18 -- # local node= 00:04:53.961 14:06:46 -- setup/common.sh@19 -- # local var val 00:04:53.961 14:06:46 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.961 14:06:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.961 14:06:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.961 14:06:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.961 14:06:46 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.961 
14:06:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.961 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.961 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.961 14:06:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 2118460 kB' 'MemAvailable: 7382888 kB' 'Buffers: 40184 kB' 'Cached: 5323160 kB' 'SwapCached: 0 kB' 'Active: 1375708 kB' 'Inactive: 4117416 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 140364 kB' 'Active(file): 1374628 kB' 'Inactive(file): 3977052 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 204 kB' 'Writeback: 4 kB' 'AnonPages: 159096 kB' 'Mapped: 68644 kB' 'Shmem: 2600 kB' 'KReclaimable: 234052 kB' 'Slab: 302308 kB' 'SReclaimable: 234052 kB' 'SUnreclaim: 68256 kB' 'KernelStack: 4440 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024332 kB' 'Committed_AS: 517292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19556 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:53.961 14:06:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:53.961 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 
14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.221 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.221 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- 
setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@32 -- # continue 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.222 14:06:46 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.222 14:06:46 -- 
setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:54.222 14:06:46 -- setup/common.sh@33 -- # echo 2048 00:04:54.222 14:06:46 -- setup/common.sh@33 -- # return 0 00:04:54.222 14:06:46 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:54.222 14:06:46 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:54.222 14:06:46 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:54.222 14:06:46 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:54.222 14:06:46 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:54.222 14:06:46 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:54.222 14:06:46 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:54.222 14:06:46 -- setup/hugepages.sh@207 -- # get_nodes 00:04:54.222 14:06:46 -- setup/hugepages.sh@27 -- # local node 00:04:54.222 14:06:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.222 14:06:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:54.222 14:06:46 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:54.222 14:06:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.222 14:06:46 -- setup/hugepages.sh@208 -- # clear_hp 00:04:54.222 14:06:46 -- setup/hugepages.sh@37 -- # local node hp 00:04:54.222 14:06:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.222 14:06:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.222 14:06:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:54.222 14:06:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.222 14:06:46 -- setup/hugepages.sh@41 -- # echo 0 00:04:54.222 14:06:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:54.222 14:06:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:54.222 14:06:46 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:54.222 14:06:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.222 14:06:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.222 14:06:46 -- common/autotest_common.sh@10 -- # set +x 00:04:54.222 ************************************ 00:04:54.222 START TEST default_setup 00:04:54.222 ************************************ 00:04:54.222 14:06:46 -- common/autotest_common.sh@1114 -- # default_setup 00:04:54.222 14:06:46 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:54.222 14:06:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.222 14:06:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:54.222 14:06:46 -- setup/hugepages.sh@51 -- # shift 00:04:54.222 14:06:46 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:54.222 14:06:46 -- setup/hugepages.sh@52 -- # local node_ids 00:04:54.222 14:06:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.222 14:06:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.222 14:06:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:54.222 14:06:46 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:54.222 14:06:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.222 14:06:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.222 14:06:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:54.222 14:06:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.222 14:06:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.222 14:06:46 -- 
setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:54.222 14:06:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.222 14:06:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:54.222 14:06:46 -- setup/hugepages.sh@73 -- # return 0 00:04:54.222 14:06:46 -- setup/hugepages.sh@137 -- # setup output 00:04:54.222 14:06:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.222 14:06:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:54.741 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.310 14:06:47 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:55.310 14:06:47 -- setup/hugepages.sh@89 -- # local node 00:04:55.310 14:06:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.310 14:06:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.310 14:06:47 -- setup/hugepages.sh@92 -- # local surp 00:04:55.310 14:06:47 -- setup/hugepages.sh@93 -- # local resv 00:04:55.310 14:06:47 -- setup/hugepages.sh@94 -- # local anon 00:04:55.310 14:06:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.310 14:06:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.310 14:06:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.310 14:06:47 -- setup/common.sh@18 -- # local node= 00:04:55.310 14:06:47 -- setup/common.sh@19 -- # local var val 00:04:55.310 14:06:47 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.310 14:06:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.310 14:06:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.310 14:06:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.310 14:06:47 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.310 14:06:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.310 14:06:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.310 14:06:47 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.310 14:06:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4215152 kB' 'MemAvailable: 9479564 kB' 'Buffers: 40184 kB' 'Cached: 5323160 kB' 'SwapCached: 0 kB' 'Active: 1375780 kB' 'Inactive: 4119704 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142720 kB' 'Active(file): 1374700 kB' 'Inactive(file): 3976984 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161332 kB' 'Mapped: 68468 kB' 'Shmem: 2596 kB' 'KReclaimable: 234032 kB' 'Slab: 302400 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68368 kB' 'KernelStack: 4400 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:55.310 14:06:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.310 14:06:47 -- setup/common.sh@32 -- # continue 00:04:55.310 14:06:47 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.310 14:06:47 -- setup/common.sh@31 -- # read -r var val 
00:04:55.310-311 14:06:47 -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' read/compare loop over every /proc/meminfo key from MemFree through HardwareCorrupted, each tested against AnonHugePages and skipped via continue]
00:04:55.311 14:06:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.311 14:06:47 -- setup/common.sh@33 -- # echo 0
00:04:55.311 14:06:47 -- setup/common.sh@33 -- # return 0
00:04:55.311 14:06:47 -- setup/hugepages.sh@97 -- # anon=0
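The trace above is get_meminfo's per-key walk of /proc/meminfo: bash splits each line on ': ', compares the key against the requested field (AnonHugePages here, which reads back 0 and becomes anon=0), and continues past everything else. A minimal sketch of that pattern, with a hypothetical helper name, since only the traced behavior of setup/common.sh is visible here:

    # get_meminfo_sketch KEY -- hypothetical reconstruction of the traced scan
    get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do    # e.g. var=MemFree val=4215928 _=kB
        [[ $var == "$get" ]] || continue      # non-matching keys just continue, as traced
        echo "$val"                           # bare value; hugepage counts carry no kB suffix
        return 0
      done < /proc/meminfo
      return 1                                # requested key not present
    }

Calling get_meminfo_sketch AnonHugePages on this runner would print 0, matching the anon=0 assignment above.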
00:04:55.311 14:06:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:55.311 14:06:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.311 14:06:47 -- setup/common.sh@18 -- # local node=
00:04:55.311 14:06:47 -- setup/common.sh@19 -- # local var val
00:04:55.311 14:06:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.311 14:06:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.311 14:06:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.311 14:06:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.311 14:06:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.311 14:06:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.311 14:06:47 -- setup/common.sh@31 -- # IFS=': '
00:04:55.312 14:06:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4215928 kB' 'MemAvailable: 9480340 kB' 'Buffers: 40184 kB' 'Cached: 5323164 kB' 'SwapCached: 0 kB' 'Active: 1375780 kB' 'Inactive: 4119240 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142256 kB' 'Active(file): 1374700 kB' 'Inactive(file): 3976984 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 160868 kB' 'Mapped: 68428 kB' 'Shmem: 2596 kB' 'KReclaimable: 234032 kB' 'Slab: 302400 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68368 kB' 'KernelStack: 4368 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:55.312 14:06:47 -- setup/common.sh@31 -- # read -r var val _
00:04:55.312-574 14:06:47 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan over MemTotal .. HugePages_Rsvd, each tested against HugePages_Surp and skipped via continue]
00:04:55.574 14:06:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.574 14:06:47 -- setup/common.sh@33 -- # echo 0
00:04:55.574 14:06:47 -- setup/common.sh@33 -- # return 0
00:04:55.574 14:06:47 -- setup/hugepages.sh@99 -- # surp=0
00:04:55.574 14:06:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:55.574 14:06:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:55.575 14:06:47 -- setup/common.sh@18 -- # local node=
00:04:55.575 14:06:47 -- setup/common.sh@19 -- # local var val
00:04:55.575 14:06:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.575 14:06:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.575 14:06:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.575 14:06:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.575 14:06:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.575 14:06:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.575 14:06:47 -- setup/common.sh@31 -- # IFS=': '
00:04:55.575 14:06:47 -- setup/common.sh@31 -- # read -r var val _
00:04:55.575 14:06:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4215912 kB' 'MemAvailable: 9480324 kB' 'Buffers: 40184 kB' 'Cached: 5323164 kB' 'SwapCached: 0 kB' 'Active: 1375772 kB' 'Inactive: 4119700 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 142716 kB' 'Active(file): 1374700 kB' 'Inactive(file): 3976984 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161332 kB' 'Mapped: 68424 kB' 'Shmem: 2596 kB' 'KReclaimable: 234032 kB' 'Slab: 302272 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68240 kB' 'KernelStack: 4432 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:55.575-576 14:06:47 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan over MemTotal .. HugePages_Free, each tested against HugePages_Rsvd and skipped via continue]
00:04:55.576 14:06:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:55.576 14:06:47 -- setup/common.sh@33 -- # echo 0
00:04:55.576 14:06:47 -- setup/common.sh@33 -- # return 0
00:04:55.576 nr_hugepages=1024
00:04:55.576 resv_hugepages=0
00:04:55.576 surplus_hugepages=0
00:04:55.576 anon_hugepages=0
00:04:55.576 14:06:47 -- setup/hugepages.sh@100 -- # resv=0
00:04:55.576 14:06:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:55.576 14:06:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:55.576 14:06:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:55.576 14:06:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:55.576 14:06:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.576 14:06:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
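The two arithmetic checks above are the test's accounting identity: the hugepage total the kernel reports must equal the requested page count plus the surplus and reserved pages that were just read back as 0. Restated standalone (a hedged sketch reusing the hypothetical get_meminfo_sketch helper from earlier, not the script's own code):

    requested=1024                                # nr_hugepages echoed above
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # re-read next in the trace
    (( total == requested + surp + resv )) || echo "hugepage accounting mismatch" >&2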
00:04:55.576 14:06:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:55.576 14:06:47 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:55.576 14:06:47 -- setup/common.sh@18 -- # local node=
00:04:55.576 14:06:47 -- setup/common.sh@19 -- # local var val
00:04:55.576 14:06:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.576 14:06:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.576 14:06:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.576 14:06:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.576 14:06:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.576 14:06:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.576 14:06:47 -- setup/common.sh@31 -- # IFS=': '
00:04:55.576 14:06:47 -- setup/common.sh@31 -- # read -r var val _
00:04:55.576 14:06:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4215912 kB' 'MemAvailable: 9480324 kB' 'Buffers: 40184 kB' 'Cached: 5323164 kB' 'SwapCached: 0 kB' 'Active: 1375772 kB' 'Inactive: 4119368 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 142384 kB' 'Active(file): 1374700 kB' 'Inactive(file): 3976984 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161052 kB' 'Mapped: 68424 kB' 'Shmem: 2596 kB' 'KReclaimable: 234032 kB' 'Slab: 302272 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68240 kB' 'KernelStack: 4496 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:55.576-577 14:06:47 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan over MemTotal .. FilePmdMapped, each tested against HugePages_Total and skipped via continue]
00:04:55.577 14:06:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:55.577 14:06:47 -- setup/common.sh@33 -- # echo 1024
00:04:55.577 14:06:47 -- setup/common.sh@33 -- # return 0
00:04:55.577 14:06:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:55.577 14:06:47 -- setup/hugepages.sh@112 -- # get_nodes
00:04:55.577 14:06:47 -- setup/hugepages.sh@27 -- # local node
00:04:55.577 14:06:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:55.577 14:06:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:55.577 14:06:47 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:55.577 14:06:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:55.577 14:06:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.578 14:06:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.578 14:06:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:55.578 14:06:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.578 14:06:47 -- setup/common.sh@18 -- # local node=0
00:04:55.578 14:06:47 -- setup/common.sh@19 -- # local var val
00:04:55.578 14:06:47 -- setup/common.sh@20 -- # local mem_f mem
00:04:55.578 14:06:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.578 14:06:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:55.578 14:06:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:55.578 14:06:47 -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.578 14:06:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.578 14:06:47 -- setup/common.sh@31 -- # IFS=': '
00:04:55.578 14:06:47 -- setup/common.sh@31 -- # read -r var val _
00:04:55.578 14:06:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4215912 kB' 'MemUsed: 8027056 kB' 'SwapCached: 0 kB' 'Active: 1375772 kB' 'Inactive: 4119368 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 142384 kB' 'Active(file): 1374700 kB' 'Inactive(file): 3976984 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'FilePages: 5363348 kB' 'Mapped: 68424 kB' 'AnonPages: 161052 kB' 'Shmem: 2596 kB' 'KernelStack: 4496 kB' 'PageTables: 3736 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234032 kB' 'Slab: 302272 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:55.578-579 14:06:47 -- setup/common.sh@31-32 -- # [xtrace condensed: per-key scan over node0 keys MemTotal .. HugePages_Free, each tested against HugePages_Surp and skipped via continue]
00:04:55.579 14:06:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.579 14:06:47 -- setup/common.sh@33 -- # echo 0
00:04:55.579 14:06:47 -- setup/common.sh@33 -- # return 0
00:04:55.579 14:06:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.579 14:06:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.579 14:06:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.579 14:06:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.579 14:06:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:55.579 node0=1024 expecting 1024
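get_meminfo is node-aware: with an empty node argument the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and it falls back to /proc/meminfo, but get_meminfo HugePages_Surp 0 above switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that source selection, assuming a hypothetical function name:

    # dump_meminfo_sketch [NODE] -- hypothetical sketch of the traced source selection
    dump_meminfo_sketch() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view, as for node 0 above
      fi
      shopt -s extglob                   # +([0-9]) below is an extglob pattern
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix of per-node lines
      printf '%s\n' "${mem[@]}"
    }

Note the per-node file exposes a different key set (MemUsed and FilePages instead of the swap and vmalloc counters), which is visible in the node0 snapshot above.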
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:55.579 00:04:55.579 real 0m1.386s 00:04:55.579 user 0m0.360s 00:04:55.579 sys 0m1.010s 00:04:55.579 14:06:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.579 14:06:47 -- common/autotest_common.sh@10 -- # set +x 00:04:55.579 ************************************ 00:04:55.579 END TEST default_setup 00:04:55.579 ************************************ 00:04:55.579 14:06:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:55.579 14:06:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.579 14:06:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.579 14:06:47 -- common/autotest_common.sh@10 -- # set +x 00:04:55.579 ************************************ 00:04:55.579 START TEST per_node_1G_alloc 00:04:55.579 ************************************ 00:04:55.579 14:06:47 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:55.579 14:06:47 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:55.579 14:06:47 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:55.579 14:06:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.579 14:06:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:55.579 14:06:47 -- setup/hugepages.sh@51 -- # shift 00:04:55.579 14:06:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:55.579 14:06:47 -- setup/hugepages.sh@52 -- # local node_ids 00:04:55.579 14:06:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.579 14:06:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.579 14:06:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:55.579 14:06:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:55.579 14:06:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.579 14:06:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.579 14:06:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:55.579 14:06:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.579 14:06:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.579 14:06:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:55.579 14:06:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:55.579 14:06:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:55.579 14:06:47 -- setup/hugepages.sh@73 -- # return 0 00:04:55.579 14:06:47 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:55.579 14:06:47 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:55.579 14:06:47 -- setup/hugepages.sh@146 -- # setup output 00:04:55.579 14:06:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.579 14:06:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.839 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:55.839 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:56.410 14:06:48 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:56.410 14:06:48 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:56.410 14:06:48 -- setup/hugepages.sh@89 -- # local node 00:04:56.410 14:06:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.410 14:06:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.410 14:06:48 -- setup/hugepages.sh@92 -- # local surp 00:04:56.410 14:06:48 -- setup/hugepages.sh@93 -- # local resv 00:04:56.410 14:06:48 -- setup/hugepages.sh@94 -- # local anon 00:04:56.410 14:06:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]]
00:04:56.410 14:06:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.410 14:06:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.410 14:06:48 -- setup/common.sh@18 -- # local node=
00:04:56.410 14:06:48 -- setup/common.sh@19 -- # local var val
00:04:56.410 14:06:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.410 14:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.410 14:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.410 14:06:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.410 14:06:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.410 14:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.410 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.410 14:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5266932 kB' 'MemAvailable: 10531352 kB' 'Buffers: 40184 kB' 'Cached: 5323160 kB' 'SwapCached: 0 kB' 'Active: 1375820 kB' 'Inactive: 4119480 kB' 'Active(anon): 1096 kB' 'Inactive(anon): 142512 kB' 'Active(file): 1374724 kB' 'Inactive(file): 3976968 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 161252 kB' 'Mapped: 68304 kB' 'Shmem: 2588 kB' 'KReclaimable: 234032 kB' 'Slab: 302072 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68040 kB' 'KernelStack: 4460 kB' 'PageTables: 4084 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:56.410 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.410 14:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.410 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.411 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.411 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.411 14:06:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.411 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.411 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.411 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.411 14:06:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:56.411 14:06:48 -- setup/common.sh@33 -- # echo 0
00:04:56.411 14:06:48 -- setup/common.sh@33 -- # return 0
00:04:56.411 14:06:48 -- setup/hugepages.sh@97 -- # anon=0
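The get_test_nr_hugepages trace earlier in this test (size=1048576, nr_hugepages=512) is plain division by the default hugepage size, and the snapshot just scanned confirms the result: Hugepagesize is 2048 kB and Hugetlb is 1048576 kB, giving HugePages_Total = 512. A sketch of that conversion; treating both figures as kB is an assumption read off the snapshot, and the function is a paraphrase, not the verbatim SPDK helper:

    # hypothetical paraphrase of the size -> page-count conversion
    default_hugepages=2048                         # kB, Hugepagesize in the snapshot
    get_test_nr_hugepages() {
        local size=$1                              # requested pool size, assumed kB
        (( size >= default_hugepages )) || return 1
        echo $(( size / default_hugepages ))       # 1048576 / 2048 = 512
    }
    nr_hugepages=$(get_test_nr_hugepages 1048576)  # -> 512, matching HugePages_Total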
00:04:56.411 14:06:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.411 14:06:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.411 14:06:48 -- setup/common.sh@18 -- # local node=
00:04:56.411 14:06:48 -- setup/common.sh@19 -- # local var val
00:04:56.411 14:06:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.411 14:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.411 14:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.411 14:06:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.411 14:06:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.411 14:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.411 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.412 14:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5267444 kB' 'MemAvailable: 10531864 kB' 'Buffers: 40184 kB' 'Cached: 5323160 kB' 'SwapCached: 0 kB' 'Active: 1375808 kB' 'Inactive: 4119060 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142096 kB' 'Active(file): 1374728 kB' 'Inactive(file): 3976964 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 161132 kB' 'Mapped: 68260 kB' 'Shmem: 2588 kB' 'KReclaimable: 234032 kB' 'Slab: 302128 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68096 kB' 'KernelStack: 4408 kB' 'PageTables: 3856 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:56.412 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.412 14:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.412 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.412 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.412 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.413 14:06:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.413 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.413 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.413 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.413 14:06:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.413 14:06:48 -- setup/common.sh@33 -- # echo 0
00:04:56.413 14:06:48 -- setup/common.sh@33 -- # return 0
00:04:56.413 14:06:48 -- setup/hugepages.sh@99 -- # surp=0
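Every get_meminfo call in this stretch is the same pattern: snapshot a meminfo file into an array, then walk it with IFS=': ' read until the requested key matches and echo its value, which is what produces the long key-by-key xtrace. A runnable sketch of that pattern (a reconstruction from the trace, not the verbatim setup/common.sh source):

    #!/usr/bin/env bash
    shopt -s extglob                        # +([0-9]) below is an extglob pattern
    get_meminfo() {                         # sketch reconstructed from the xtrace
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # per-node queries read the NUMA-local file instead
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # node files prefix lines with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp              # prints 0 on this box
    get_meminfo HugePages_Surp 0            # same key, from node0's meminfo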
00:04:56.413 14:06:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:56.413 14:06:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:56.413 14:06:48 -- setup/common.sh@18 -- # local node=
00:04:56.413 14:06:48 -- setup/common.sh@19 -- # local var val
00:04:56.413 14:06:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.413 14:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.413 14:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.413 14:06:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.413 14:06:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.413 14:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.413 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.413 14:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5267912 kB' 'MemAvailable: 10532328 kB' 'Buffers: 40184 kB' 'Cached: 5323156 kB' 'SwapCached: 0 kB' 'Active: 1375792 kB' 'Inactive: 4118900 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 141940 kB' 'Active(file): 1374728 kB' 'Inactive(file): 3976960 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 160608 kB' 'Mapped: 68244 kB' 'Shmem: 2588 kB' 'KReclaimable: 234032 kB' 'Slab: 302128 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68096 kB' 'KernelStack: 4392 kB' 'PageTables: 3672 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19468 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:56.413 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.413 14:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.413 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.413 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.414 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.414 14:06:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.414 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.414 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.414 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.414 14:06:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:56.414 14:06:48 -- setup/common.sh@33 -- # echo 0
00:04:56.414 14:06:48 -- setup/common.sh@33 -- # return 0
00:04:56.414 14:06:48 -- setup/hugepages.sh@100 -- # resv=0
00:04:56.414 14:06:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:56.414 nr_hugepages=512
00:04:56.414 14:06:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.414 resv_hugepages=0
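With anon, surp and resv all read back as 0, the checks that follow assert that the configured pool accounts for every page before the per-node walk begins. A sketch of that accounting, using the values from this run (paraphrased logic, not the verbatim script):

    # paraphrase of the total-count assertions traced below; values from this run
    nr_hugepages=512 surp=0 resv=0
    hp_total=512    # HugePages_Total as reported in the snapshots above
    (( hp_total == nr_hugepages + surp + resv )) || echo 'hugepage accounting off'
    (( hp_total == nr_hugepages )) || echo 'unexpected surplus/reserved pages'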
00:04:56.414 14:06:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.414 surplus_hugepages=0
00:04:56.414 14:06:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.414 anon_hugepages=0
00:04:56.414 14:06:48 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:56.414 14:06:48 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:56.414 14:06:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.414 14:06:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.415 14:06:48 -- setup/common.sh@18 -- # local node=
00:04:56.415 14:06:48 -- setup/common.sh@19 -- # local var val
00:04:56.415 14:06:48 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.415 14:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.415 14:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.415 14:06:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.415 14:06:48 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.415 14:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.415 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.415 14:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5267912 kB' 'MemAvailable: 10532328 kB' 'Buffers: 40184 kB' 'Cached: 5323156 kB' 'SwapCached: 0 kB' 'Active: 1375792 kB' 'Inactive: 4118916 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 141956 kB' 'Active(file): 1374728 kB' 'Inactive(file): 3976960 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 160596 kB' 'Mapped: 68244 kB' 'Shmem: 2588 kB' 'KReclaimable: 234032 kB' 'Slab: 302128 kB' 'SReclaimable: 234032 kB' 'SUnreclaim: 68096 kB' 'KernelStack: 4392 kB' 'PageTables: 3672 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 518604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19484 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:56.415 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.415 14:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.415 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.415 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.676 14:06:48 -- setup/common.sh@31 -- # read -r var val _
00:04:56.677 14:06:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:56.677 14:06:48 -- setup/common.sh@32 -- # continue
00:04:56.677 14:06:48 -- setup/common.sh@31 -- # IFS=': '
00:04:56.677
14:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.677 14:06:48 -- setup/common.sh@33 -- # echo 512 00:04:56.677 14:06:48 -- setup/common.sh@33 -- # return 0 00:04:56.677 14:06:48 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:56.677 14:06:48 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.677 14:06:48 -- setup/hugepages.sh@27 -- # local node 00:04:56.677 14:06:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.677 14:06:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.677 14:06:48 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:56.677 14:06:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.677 14:06:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.677 14:06:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.677 14:06:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.677 14:06:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.677 14:06:48 -- setup/common.sh@18 -- # local node=0 00:04:56.677 14:06:48 -- setup/common.sh@19 -- # local var val 00:04:56.677 14:06:48 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.677 14:06:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.677 14:06:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.677 14:06:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.677 14:06:48 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.677 14:06:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.677 14:06:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5267172 kB' 'MemUsed: 6975796 kB' 'SwapCached: 0 kB' 'Active: 1375796 kB' 'Inactive: 4119156 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142196 kB' 'Active(file): 1374728 kB' 'Inactive(file): 3976960 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 5363348 kB' 'Mapped: 68208 kB' 'AnonPages: 161036 kB' 'Shmem: 2596 kB' 'KernelStack: 4448 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234048 kB' 'Slab: 302164 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # continue 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # continue 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.677 14:06:48 -- setup/common.sh@32 -- # continue 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.677 14:06:48 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.677 14:06:48 -- 
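The dump above was read from /sys/devices/system/node/node0/meminfo. Unlike /proc/meminfo, each line there carries a "Node 0 " prefix, which the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips before parsing. A minimal runnable illustration of that one step (the sysfs path is standard kernel ABI; nothing else is assumed beyond what the trace shows):

    #!/usr/bin/env bash
    shopt -s extglob                       # +([0-9]) needs extended globbing
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")       # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]:0:3}"          # show the first few cleaned-up lines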
00:04:56.677 14:06:48 -- setup/common.sh@31..32 -- # [xtrace condensed: node0 keys MemTotal .. HugePages_Free read and compared against HugePages_Surp; each non-match hit "continue"]
00:04:56.678 14:06:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:56.678 14:06:48 -- setup/common.sh@33 -- # echo 0
00:04:56.678 14:06:48 -- setup/common.sh@33 -- # return 0
00:04:56.678 14:06:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:56.678 14:06:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:56.678 14:06:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:56.678 14:06:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:56.678 node0=512 expecting 512
00:04:56.678 14:06:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:56.678
00:04:56.678 real	0m1.024s
00:04:56.678 user	0m0.339s
00:04:56.678 sys	0m0.624s
00:04:56.678 14:06:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:56.678 14:06:48 -- common/autotest_common.sh@10 -- # set +x
00:04:56.678 ************************************
00:04:56.678 END TEST per_node_1G_alloc
00:04:56.678 ************************************
00:04:56.678 14:06:48 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:56.678 14:06:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:56.678 14:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.678 14:06:48 -- common/autotest_common.sh@10 -- # set +x
00:04:56.678 ************************************
00:04:56.678 START TEST even_2G_alloc
00:04:56.678 ************************************
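Before following the even_2G_alloc trace, it helps to see the routine that generates almost all of the read/continue churn in this log: setup/common.sh's get_meminfo, which scans a meminfo file for one key and prints its value. The sketch below is a reconstruction distilled from the xtrace, not the repository's verbatim source; the field names and file paths come straight from the trace, while the read loop is simplified to a for/here-string form:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) patterns below

    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # the per-node copy under /sys/devices/system/node when NODE is given.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Total:  512" -> var, val
            [[ $var == "$get" ]] || continue        # the "continue" entries in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total     # whole-system count, e.g. 1024 in this test
    get_meminfo HugePages_Surp 0    # per-node lookup, e.g. 0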
00:04:56.678 14:06:48 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:56.678 14:06:48 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:56.678 14:06:48 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:56.678 14:06:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:56.678 14:06:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:56.678 14:06:48 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:56.678 14:06:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:56.678 14:06:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:56.678 14:06:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:56.678 14:06:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:56.678 14:06:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:56.678 14:06:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:56.678 14:06:48 -- setup/hugepages.sh@83 -- # : 0
00:04:56.678 14:06:48 -- setup/hugepages.sh@84 -- # : 0
00:04:56.678 14:06:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.678 14:06:48 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:56.678 14:06:48 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:56.678 14:06:48 -- setup/hugepages.sh@153 -- # setup output
00:04:56.678 14:06:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.678 14:06:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:56.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:56.938 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:57.877 14:06:49 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:57.877 14:06:49 -- setup/hugepages.sh@89 -- # local node
00:04:57.877 14:06:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:57.877 14:06:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:57.877 14:06:49 -- setup/hugepages.sh@92 -- # local surp
00:04:57.877 14:06:49 -- setup/hugepages.sh@93 -- # local resv
00:04:57.877 14:06:49 -- setup/hugepages.sh@94 -- # local anon
00:04:57.877 14:06:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:57.877 14:06:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:57.877 14:06:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:57.877 14:06:49 -- setup/common.sh@18 -- # local node=
00:04:57.877 14:06:49 -- setup/common.sh@19 -- # local var val
00:04:57.877 14:06:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.877 14:06:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.877 14:06:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.877 14:06:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.877 14:06:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.877 14:06:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.877 14:06:49 -- setup/common.sh@31 -- # IFS=': '
00:04:57.877 14:06:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216588 kB' 'MemAvailable: 9481028 kB' 'Buffers: 40192 kB' 'Cached: 5323164 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4119848 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142904 kB' 'Active(file): 1374752 kB' 'Inactive(file): 3976944 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 161588 kB' 'Mapped: 68216 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302292 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68244 kB' 'KernelStack: 4464 kB' 'PageTables: 3656 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:57.877 14:06:49 -- setup/common.sh@31..32 -- # [xtrace condensed: keys MemTotal .. HardwareCorrupted compared against AnonHugePages; all "continue"]
00:04:57.878 14:06:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:57.878 14:06:49 -- setup/common.sh@33 -- # echo 0
00:04:57.878 14:06:49 -- setup/common.sh@33 -- # return 0
00:04:57.878 14:06:49 -- setup/hugepages.sh@97 -- # anon=0
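The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry above is verify_nr_hugepages checking /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active THP mode (here "[madvise]"); only when THP is not fully disabled does AnonHugePages need to be accounted for. A runnable sketch of the same check (standard sysfs path; the variable name is ours):

    #!/usr/bin/env bash
    # The active THP mode is bracketed, e.g. "always [madvise] never".
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP not disabled ($thp); AnonHugePages may be nonzero"
    fi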
00:04:57.878 14:06:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:57.878 14:06:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.878 14:06:49 -- setup/common.sh@18 -- # local node=
00:04:57.878 14:06:49 -- setup/common.sh@19 -- # local var val
00:04:57.878 14:06:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.878 14:06:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.878 14:06:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.878 14:06:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.878 14:06:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.878 14:06:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.878 14:06:49 -- setup/common.sh@31 -- # IFS=': '
00:04:57.878 14:06:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4217112 kB' 'MemAvailable: 9481556 kB' 'Buffers: 40192 kB' 'Cached: 5323168 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4119552 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142604 kB' 'Active(file): 1374752 kB' 'Inactive(file): 3976948 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 161224 kB' 'Mapped: 68172 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302156 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68108 kB' 'KernelStack: 4400 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:57.878 14:06:49 -- setup/common.sh@31..32 -- # [xtrace condensed: keys MemTotal .. HugePages_Rsvd compared against HugePages_Surp; all "continue"]
00:04:57.879 14:06:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.879 14:06:49 -- setup/common.sh@33 -- # echo 0
00:04:57.879 14:06:49 -- setup/common.sh@33 -- # return 0
00:04:57.879 14:06:49 -- setup/hugepages.sh@99 -- # surp=0
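The anon and surp values just captured, plus resv from the lookup that follows, feed the consistency check at setup/hugepages.sh@107 below: the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages. A compact, hedged equivalent of that accounting check (awk is used here as a shortcut in place of get_meminfo; the expected count is this run's 1024):

    #!/usr/bin/env bash
    # Hugepage accounting as verify_nr_hugepages checks it.
    expected=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == expected + surp + resv )) || echo 'hugepage accounting mismatch' >&2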
00:04:57.879 14:06:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:57.879 14:06:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:57.879 14:06:49 -- setup/common.sh@18 -- # local node=
00:04:57.879 14:06:49 -- setup/common.sh@19 -- # local var val
00:04:57.879 14:06:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.879 14:06:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.879 14:06:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.879 14:06:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.879 14:06:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.879 14:06:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.879 14:06:49 -- setup/common.sh@31 -- # IFS=': '
00:04:57.879 14:06:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4217868 kB' 'MemAvailable: 9482312 kB' 'Buffers: 40192 kB' 'Cached: 5323168 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4119480 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142532 kB' 'Active(file): 1374752 kB' 'Inactive(file): 3976948 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 161152 kB' 'Mapped: 68212 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302156 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68108 kB' 'KernelStack: 4384 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:57.880 14:06:49 -- setup/common.sh@31..32 -- # [xtrace condensed: keys MemTotal .. HugePages_Free compared against HugePages_Rsvd; all "continue"]
00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:57.881 14:06:49 -- setup/common.sh@33 -- # echo 0
00:04:57.881 14:06:49 -- setup/common.sh@33 -- # return 0
00:04:57.881 14:06:49 -- setup/hugepages.sh@100 -- # resv=0
00:04:57.881 14:06:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:57.881 nr_hugepages=1024
00:04:57.881 14:06:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:57.881 resv_hugepages=0
00:04:57.881 14:06:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:57.881 surplus_hugepages=0
00:04:57.881 14:06:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:57.881 anon_hugepages=0
00:04:57.881 14:06:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:57.881 14:06:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
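For the even_2G_alloc case, NRHUGE=1024 with HUGE_EVEN_ALLOC=yes asked setup.sh to spread the 2 MiB pages evenly across NUMA nodes. This log does not show how setup.sh implements that, but the stock kernel interface for it is the per-node nr_hugepages file; a generic illustration (run as root), with any remainder of the division simply dropped:

    #!/usr/bin/env bash
    # Spread NRHUGE 2 MiB hugepages evenly across all NUMA nodes via sysfs.
    NRHUGE=1024
    shopt -s extglob nullglob
    nodes=(/sys/devices/system/node/node+([0-9]))
    per_node=$(( NRHUGE / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
    done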
3976948 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 161228 kB' 'Mapped: 68212 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302156 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68108 kB' 'KernelStack: 4436 kB' 'PageTables: 3404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 519396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 
-- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.881 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.881 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # continue 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.882 14:06:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.882 14:06:49 -- setup/common.sh@33 -- # echo 1024 00:04:57.882 14:06:49 -- setup/common.sh@33 -- # return 0 00:04:57.882 14:06:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:57.882 14:06:49 -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.882 14:06:49 -- setup/hugepages.sh@27 -- # local node 00:04:57.882 14:06:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.882 14:06:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.882 14:06:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:57.882 14:06:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.882 14:06:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.882 14:06:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.882 14:06:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:57.882 14:06:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.882 14:06:49 -- setup/common.sh@18 -- # local node=0 00:04:57.882 14:06:49 -- setup/common.sh@19 -- # local var val 00:04:57.882 14:06:49 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.882 14:06:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.882 14:06:49 -- 
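All of the common.sh@16-@33 activity in this trace is the same helper at work: get_meminfo snapshots /proc/meminfo (or a node's sysfs meminfo when a node argument is given), strips the "Node N " prefix those sysfs files carry, then walks the "key: value" pairs until the requested field matches and echoes its value. A minimal self-contained sketch of that scanner, as a simplified rewrite rather than the verbatim test/setup/common.sh helper:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo scanner traced above (simplified rewrite,
    # not the verbatim SPDK test/setup/common.sh helper).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer the per-node sysfs file (common.sh@23/@24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines begin with "Node 0 "
        # Scan "key: value" pairs until the requested field matches (common.sh@31-@33)
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On the host traced here, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 reads node0's file, matching the two lookups around this point in the log.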
00:04:57.882 14:06:49 -- setup/hugepages.sh@112 -- # get_nodes
00:04:57.882 14:06:49 -- setup/hugepages.sh@27 -- # local node
00:04:57.882 14:06:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.882 14:06:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:57.882 14:06:49 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:57.882 14:06:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:57.882 14:06:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.882 14:06:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.882 14:06:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:57.882 14:06:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.882 14:06:49 -- setup/common.sh@18 -- # local node=0
00:04:57.882 14:06:49 -- setup/common.sh@19 -- # local var val
00:04:57.882 14:06:49 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.882 14:06:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.882 14:06:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:57.882 14:06:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:57.882 14:06:49 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.882 14:06:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.882 14:06:49 -- setup/common.sh@31 -- # IFS=': '
00:04:57.882 14:06:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4217848 kB' 'MemUsed: 8025120 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4119608 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 142660 kB' 'Active(file): 1374752 kB' 'Inactive(file): 3976948 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 5363360 kB' 'Mapped: 68212 kB' 'AnonPages: 161328 kB' 'Shmem: 2596 kB' 'KernelStack: 4416 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234048 kB' 'Slab: 302156 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:57.882 14:06:49 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: common.sh@31/@32 compare each node0 field, MemTotal through HugePages_Free, against HugePages_Surp and continue, until HugePages_Surp matches]
00:04:58.144 14:06:49 -- setup/common.sh@33 -- # echo 0
00:04:58.144 14:06:49 -- setup/common.sh@33 -- # return 0
00:04:58.144 14:06:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:58.144 14:06:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:58.144 14:06:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:58.144 14:06:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:58.144 14:06:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:58.144 node0=1024 expecting 1024
00:04:58.144 14:06:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:58.144
00:04:58.144 real 0m1.347s
00:04:58.144 user 0m0.303s
00:04:58.144 sys 0m0.985s
00:04:58.144 14:06:49 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:58.144 14:06:49 -- common/autotest_common.sh@10 -- # set +x
00:04:58.144 ************************************
00:04:58.144 END TEST even_2G_alloc
00:04:58.144 ************************************
00:04:58.144 14:06:49 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:58.144 14:06:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.144 14:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.144 14:06:50 -- common/autotest_common.sh@10 -- # set +x
00:04:58.144 ************************************
00:04:58.144 START TEST odd_alloc
00:04:58.144 ************************************
00:04:58.144 14:06:50 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:58.144 14:06:50 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:58.144 14:06:50 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:58.144 14:06:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:58.144 14:06:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:58.144 14:06:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:58.144 14:06:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:58.144 14:06:50 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:58.144 14:06:50 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:58.144 14:06:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:58.144 14:06:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:58.144 14:06:50 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:58.144 14:06:50 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:58.144 14:06:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:58.144 14:06:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:58.144 14:06:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:58.144 14:06:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:58.144 14:06:50 -- setup/hugepages.sh@83 -- # : 0
00:04:58.144 14:06:50 -- setup/hugepages.sh@84 -- # : 0
00:04:58.144 14:06:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:58.144 14:06:50 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:58.144 14:06:50 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:58.144 14:06:50 -- setup/hugepages.sh@160 -- # setup output
00:04:58.144 14:06:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:58.144 14:06:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
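The get_test_nr_hugepages call above receives 2098176 kB (HUGEMEM=2049, i.e. 2049 MiB), and at the 2048 kB Hugepagesize reported in the meminfo dumps that is 2098176 / 2048 = 1024.5 pages; the trace settles on nr_hugepages=1025, the odd page count this test is named for. A sketch of that conversion; the ceiling division is an assumption read off the traced values, not the verbatim hugepages.sh expression:

    # HUGEMEM (MiB) -> nr_hugepages, using the values traced above.
    # Assumption: round up so the full request is covered (1024.5 -> 1025).
    HUGEMEM=2049                 # as exported at hugepages.sh@160
    default_hugepages=2048       # kB, the Hugepagesize in the meminfo dumps
    size=$(( HUGEMEM * 1024 ))   # 2098176 kB, matching "local size=2098176"
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1025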
00:04:58.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:58.433 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:59.371 14:06:51 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:59.371 14:06:51 -- setup/hugepages.sh@89 -- # local node
00:04:59.371 14:06:51 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:59.371 14:06:51 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:59.371 14:06:51 -- setup/hugepages.sh@92 -- # local surp
00:04:59.371 14:06:51 -- setup/hugepages.sh@93 -- # local resv
00:04:59.371 14:06:51 -- setup/hugepages.sh@94 -- # local anon
00:04:59.371 14:06:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:59.371 14:06:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:59.371 14:06:51 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:59.371 14:06:51 -- setup/common.sh@18 -- # local node=
00:04:59.371 14:06:51 -- setup/common.sh@19 -- # local var val
00:04:59.371 14:06:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.371 14:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.371 14:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.371 14:06:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.371 14:06:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.371 14:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.371 14:06:51 -- setup/common.sh@31 -- # IFS=': '
00:04:59.371 14:06:51 -- setup/common.sh@31 -- # read -r var val _
00:04:59.371 14:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216004 kB' 'MemAvailable: 9480452 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375840 kB' 'Inactive: 4115480 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 138540 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157176 kB' 'Mapped: 67372 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302092 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68044 kB' 'KernelStack: 4388 kB' 'PageTables: 3268 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: common.sh@31/@32 compare each field, MemTotal through HardwareCorrupted, against AnonHugePages and continue, until AnonHugePages matches]
00:04:59.372 14:06:51 -- setup/common.sh@33 -- # echo 0
00:04:59.372 14:06:51 -- setup/common.sh@33 -- # return 0
00:04:59.372 14:06:51 -- setup/hugepages.sh@97 -- # anon=0
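The hugepages.sh@96 test above is the transparent-hugepage gate: the string "always [madvise] never" it matches against *\[\n\e\v\e\r\]* is the bracketed-mode format of the kernel's THP "enabled" knob, so AnonHugePages is only consulted when THP is not disabled outright. A hedged sketch of that gate, reusing the get_meminfo sketch from earlier; the sysfs path is inferred from the mode string, it is not shown in this trace:

    # Sketch of the anon-hugepages gate (hugepages.sh@96/@97).
    # Assumption: the compared string is the THP "enabled" knob's contents.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump above
    else
        anon=0   # THP disabled outright: nothing to account for
    fi
    echo "anon_hugepages=$anon"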
00:04:59.372 14:06:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:59.372 14:06:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.372 14:06:51 -- setup/common.sh@18 -- # local node=
00:04:59.372 14:06:51 -- setup/common.sh@19 -- # local var val
00:04:59.372 14:06:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.372 14:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.372 14:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.372 14:06:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.372 14:06:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.372 14:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.372 14:06:51 -- setup/common.sh@31 -- # IFS=': '
00:04:59.372 14:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216540 kB' 'MemAvailable: 9480988 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115584 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138644 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157272 kB' 'Mapped: 67368 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302092 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68044 kB' 'KernelStack: 4400 kB' 'PageTables: 3460 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:59.372 14:06:51 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: common.sh@31/@32 compare each field, MemTotal through HugePages_Rsvd, against HugePages_Surp and continue, until HugePages_Surp matches]
00:04:59.373 14:06:51 -- setup/common.sh@33 -- # echo 0
00:04:59.373 14:06:51 -- setup/common.sh@33 -- # return 0
00:04:59.373 14:06:51 -- setup/hugepages.sh@99 -- # surp=0
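With anon=0 and surp=0 collected, the trace next fetches HugePages_Rsvd, the last input to the same accounting identity checked at hugepages.sh@107/@110 during even_2G_alloc above: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages. A sketch of that final check, again a simplification of the traced logic (reusing the get_meminfo sketch from earlier) rather than the script itself:

    # Accounting identity behind hugepages.sh@107/@110, with odd_alloc's values.
    nr_hugepages=1025                     # requested by get_test_nr_hugepages
    surp=$(get_meminfo HugePages_Surp)    # 0 in this trace
    resv=$(get_meminfo HugePages_Rsvd)    # fetched next in the trace
    total=$(get_meminfo HugePages_Total)  # 1025 per the meminfo dumps
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages verified"
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi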
14:06:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.373 14:06:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.373 14:06:51 -- setup/common.sh@18 -- # local node=
00:04:59.373 14:06:51 -- setup/common.sh@19 -- # local var val
00:04:59.373 14:06:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.373 14:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.373 14:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.373 14:06:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.373 14:06:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.373 14:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.373 14:06:51 -- setup/common.sh@31 -- # IFS=': '
00:04:59.373 14:06:51 -- setup/common.sh@31 -- # read -r var val _
00:04:59.374 14:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216540 kB' 'MemAvailable: 9480988 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115584 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138644 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157272 kB' 'Mapped: 67368 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302092 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68044 kB' 'KernelStack: 4400 kB' 'PageTables: 3460 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:59.374 14:06:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.374 14:06:51 -- setup/common.sh@32 -- # continue
[xtrace elided: the identical IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue steps repeat for every remaining /proc/meminfo field through HugePages_Free, until the match below]
00:04:59.375 14:06:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.375 14:06:51 -- setup/common.sh@33 -- # echo 0
00:04:59.375 14:06:51 -- setup/common.sh@33 -- # return 0
00:04:59.375 14:06:51 -- setup/hugepages.sh@100 -- # resv=0
00:04:59.375 14:06:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:59.375 nr_hugepages=1025
00:04:59.375 14:06:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:59.375 resv_hugepages=0
00:04:59.375 14:06:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:59.375 surplus_hugepages=0
00:04:59.375 14:06:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:59.375 anon_hugepages=0
00:04:59.375 14:06:51 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:59.375 14:06:51 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
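The xtrace above is common.sh's get_meminfo walking /proc/meminfo one "Key: value" line at a time until the requested key (here HugePages_Rsvd) matches, then echoing its value. A minimal standalone sketch of the same pattern; the function name get_meminfo_field is ours, not part of common.sh:

    # Print the value of one /proc/meminfo field, mirroring the
    # IFS=': ' / read -r var val _ loop traced above.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Rsvd:   0" splits into var=HugePages_Rsvd, val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

On the box traced here, get_meminfo_field HugePages_Rsvd prints 0.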
00:04:59.375 14:06:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.375 14:06:51 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.375 14:06:51 -- setup/common.sh@18 -- # local node=
00:04:59.375 14:06:51 -- setup/common.sh@19 -- # local var val
00:04:59.375 14:06:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.375 14:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.375 14:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.375 14:06:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.375 14:06:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.375 14:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.375 14:06:51 -- setup/common.sh@31 -- # IFS=': '
00:04:59.375 14:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4217556 kB' 'MemAvailable: 9482004 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115480 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138540 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157116 kB' 'Mapped: 67368 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302092 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68044 kB' 'KernelStack: 4436 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:59.375 14:06:51 -- setup/common.sh@31 -- # read -r var val _
00:04:59.375 14:06:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.375 14:06:51 -- setup/common.sh@32 -- # continue
[xtrace elided: the same read / compare / continue steps repeat for every remaining /proc/meminfo field until the match below]
00:04:59.376 14:06:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.376 14:06:51 -- setup/common.sh@33 -- # echo 1025
00:04:59.376 14:06:51 -- setup/common.sh@33 -- # return 0
00:04:59.376 14:06:51 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
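With resv, surp, and the live HugePages_Total in hand, hugepages.sh@107-110 asserts the kernel's accounting: total pages must equal the count the test requested plus surplus and reserved pages. A sketch of that identity using this run's numbers; the field helper is our shorthand, not common.sh:

    # Assert HugePages_Total == nr_hugepages + surplus + reserved,
    # the check behind '(( 1025 == nr_hugepages + surp + resv ))'.
    field() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }
    nr_hugepages=1025                 # requested by the odd_alloc test
    total=$(field HugePages_Total)    # 1025 in the snapshot above
    surp=$(field HugePages_Surp)      # 0
    resv=$(field HugePages_Rsvd)      # 0
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'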
00:04:59.376 14:06:51 -- setup/hugepages.sh@112 -- # get_nodes
00:04:59.376 14:06:51 -- setup/hugepages.sh@27 -- # local node
00:04:59.376 14:06:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.376 14:06:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:59.376 14:06:51 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:59.376 14:06:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.376 14:06:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.376 14:06:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.376 14:06:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.376 14:06:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.376 14:06:51 -- setup/common.sh@18 -- # local node=0
00:04:59.376 14:06:51 -- setup/common.sh@19 -- # local var val
00:04:59.376 14:06:51 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.376 14:06:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.376 14:06:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.376 14:06:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.376 14:06:51 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.376 14:06:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.376 14:06:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4217052 kB' 'MemUsed: 8025916 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115588 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138648 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 5363364 kB' 'Mapped: 67368 kB' 'AnonPages: 157172 kB' 'Shmem: 2596 kB' 'KernelStack: 4388 kB' 'PageTables: 3260 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234048 kB' 'Slab: 302092 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 68044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:59.376 14:06:51 -- setup/common.sh@31 -- # IFS=': '
00:04:59.376 14:06:51 -- setup/common.sh@31 -- # read -r var val _
00:04:59.376 14:06:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.376 14:06:51 -- setup/common.sh@32 -- # continue
[xtrace elided: the same read / compare / continue steps repeat for every remaining node0 meminfo field until the match below]
00:04:59.377 14:06:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.377 14:06:51 -- setup/common.sh@33 -- # echo 0
00:04:59.377 14:06:51 -- setup/common.sh@33 -- # return 0
00:04:59.377 14:06:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:59.377 14:06:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.377 14:06:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.377 14:06:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.377 14:06:51 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:59.377 node0=1025 expecting 1025
00:04:59.377 14:06:51 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:59.377 
00:04:59.377 real	0m1.383s
00:04:59.377 user	0m0.337s
00:04:59.377 sys	0m0.973s
00:04:59.377 14:06:51 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:59.377 14:06:51 -- common/autotest_common.sh@10 -- # set +x
00:04:59.377 ************************************
00:04:59.377 END TEST odd_alloc
00:04:59.377 ************************************
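The HugePages_Surp lookup just traced differs from the earlier ones in one step: because a node id (0) was passed, common.sh@24 retargets mem_f at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that common.sh@29 strips. A standalone sketch of that per-node read, using sed in place of the extglob strip:

    # Read one field from a node-local meminfo; lines there look like
    # "Node 0 HugePages_Surp:     0", so strip the "Node 0 " prefix first.
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && { echo "node${node} surplus: $val"; break; }
    done < <(sed "s/^Node ${node} //" "$mem_f")

The test then checks that each node ended up with its expected share, hence the 'node0=1025 expecting 1025' line above.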
00:04:59.637 14:06:51 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:59.637 14:06:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:59.637 14:06:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:59.637 14:06:51 -- common/autotest_common.sh@10 -- # set +x
00:04:59.637 ************************************
00:04:59.637 START TEST custom_alloc
00:04:59.637 ************************************
00:04:59.637 14:06:51 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:59.637 14:06:51 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:59.637 14:06:51 -- setup/hugepages.sh@169 -- # local node
00:04:59.637 14:06:51 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:59.637 14:06:51 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:59.637 14:06:51 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:59.637 14:06:51 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:59.637 14:06:51 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:59.637 14:06:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:59.637 14:06:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:59.637 14:06:51 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:59.637 14:06:51 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.637 14:06:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:59.637 14:06:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:59.637 14:06:51 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.637 14:06:51 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.637 14:06:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:59.637 14:06:51 -- setup/hugepages.sh@83 -- # : 0
00:04:59.637 14:06:51 -- setup/hugepages.sh@84 -- # : 0
00:04:59.637 14:06:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:59.637 14:06:51 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:59.637 14:06:51 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:59.637 14:06:51 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:59.637 14:06:51 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:59.637 14:06:51 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.637 14:06:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:59.637 14:06:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:59.637 14:06:51 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.637 14:06:51 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.637 14:06:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:59.637 14:06:51 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:59.637 14:06:51 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:59.637 14:06:51 -- setup/hugepages.sh@78 -- # return 0
00:04:59.637 14:06:51 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
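The arithmetic behind the values just traced: custom_alloc asks get_test_nr_hugepages for 1048576 kB (1 GiB), which is divided by the default hugepage size to give nr_hugepages=512, and with a single memory node the whole count lands in nodes_hp[0]. A sketch of that computation; the variable names size_kb and pagesize_kb are ours:

    # 1048576 kB requested / 2048 kB per hugepage = 512 hugepages,
    # handed to scripts/setup.sh via HUGENODE as traced above.
    size_kb=1048576
    pagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)  # 2048 here
    nr_hugepages=$(( size_kb / pagesize_kb ))                              # 512
    echo "HUGENODE=nodes_hp[0]=${nr_hugepages}"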
00:04:59.637 14:06:51 -- setup/hugepages.sh@187 -- # setup output
00:04:59.637 14:06:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.637 14:06:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:59.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:59.896 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.466 14:06:52 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:00.466 14:06:52 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:00.466 14:06:52 -- setup/hugepages.sh@89 -- # local node
00:05:00.466 14:06:52 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.466 14:06:52 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.466 14:06:52 -- setup/hugepages.sh@92 -- # local surp
00:05:00.466 14:06:52 -- setup/hugepages.sh@93 -- # local resv
00:05:00.466 14:06:52 -- setup/hugepages.sh@94 -- # local anon
00:05:00.466 14:06:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.466 14:06:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.466 14:06:52 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.466 14:06:52 -- setup/common.sh@18 -- # local node=
00:05:00.466 14:06:52 -- setup/common.sh@19 -- # local var val
00:05:00.466 14:06:52 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.466 14:06:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.466 14:06:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.466 14:06:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.466 14:06:52 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.466 14:06:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.466 14:06:52 -- setup/common.sh@31 -- # IFS=': '
00:05:00.466 14:06:52 -- setup/common.sh@31 -- # read -r var val _
00:05:00.466 14:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5266448 kB' 'MemAvailable: 10530896 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375840 kB' 'Inactive: 4115744 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 138804 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157296 kB' 'Mapped: 67292 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301880 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67832 kB' 'KernelStack: 4392 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:00.466 14:06:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.466 14:06:52 -- setup/common.sh@32 -- # continue
[xtrace elided: the same read / compare / continue steps repeat for every remaining /proc/meminfo field until the match below]
00:05:00.467 14:06:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.467 14:06:52 -- setup/common.sh@33 -- # echo 0
00:05:00.467 14:06:52 -- setup/common.sh@33 -- # return 0
00:05:00.467 14:06:52 -- setup/hugepages.sh@97 -- # anon=0
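verify_nr_hugepages first asked whether transparent hugepages are fully disabled; the bracketed token in the @96 test ("always [madvise] never") is the mode the kernel has selected. Since the mode is not "[never]", the test records the AnonHugePages baseline (0 kB here) so THP-backed anonymous memory is not mistaken for test hugepages. A sketch of that probe; this is our standalone rendering, not the verify_nr_hugepages source:

    # THP mode is the bracketed entry in this sysfs file, e.g.
    # "always [madvise] never" -> madvise is selected.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *'[never]'* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
        echo "AnonHugePages baseline: ${anon} kB"
    fi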
00:05:00.467 14:06:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.467 14:06:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.467 14:06:52 -- setup/common.sh@18 -- # local node=
00:05:00.467 14:06:52 -- setup/common.sh@19 -- # local var val
00:05:00.467 14:06:52 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.467 14:06:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.467 14:06:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.467 14:06:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.467 14:06:52 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.467 14:06:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.467 14:06:52 -- setup/common.sh@31 -- # IFS=': '
00:05:00.467 14:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5267224 kB' 'MemAvailable: 10531672 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115732 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138792 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157236 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301952 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67904 kB' 'KernelStack: 4384 kB' 'PageTables: 3404 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:00.467 14:06:52 -- setup/common.sh@31 -- # read -r var val _
00:05:00.467 14:06:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.467 14:06:52 -- setup/common.sh@32 -- # continue
[xtrace elided: the per-field comparisons against \H\u\g\e\P\a\g\e\s\_\S\u\r\p continue field by field; this excerpt of the log ends mid-loop]
-- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.468 14:06:52 -- setup/common.sh@32 -- # continue 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.468 14:06:52 -- setup/common.sh@31 -- # read -r var val _ 
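The records above trace setup/common.sh's get_meminfo helper end to end: it reads /proc/meminfo into an array with mapfile, strips any "Node <id> " prefix, then walks the fields one by one until the requested key matches and echoes its value. A minimal sketch of that pattern, reconstructed from the xtrace output (the real setup/common.sh may differ in loop details):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the trace above;
    # reconstructed for illustration, not copied from setup/common.sh.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # With a node id, the per-node sysfs counters are used instead.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this runner, matching surp=0 above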
00:05:00.468 14:06:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.468 14:06:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:00.468 14:06:52 -- setup/common.sh@18 -- # local node=
00:05:00.468 14:06:52 -- setup/common.sh@19 -- # local var val
00:05:00.468 14:06:52 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.468 14:06:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.468 14:06:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.468 14:06:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.468 14:06:52 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.469 14:06:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.469 14:06:52 -- setup/common.sh@31 -- # IFS=': '
00:05:00.469 14:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5267204 kB' 'MemAvailable: 10531652 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115836 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138896 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157368 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301952 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67904 kB' 'KernelStack: 4368 kB' 'PageTables: 3364 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 508796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[xtrace scan elided: MemTotal through HugePages_Free each compared against HugePages_Rsvd and skipped via continue]
00:05:00.470 14:06:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.470 14:06:52 -- setup/common.sh@33 -- # echo 0
00:05:00.470 14:06:52 -- setup/common.sh@33 -- # return 0
00:05:00.470 nr_hugepages=512
00:05:00.470 14:06:52 -- setup/hugepages.sh@100 -- # resv=0
00:05:00.470 14:06:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:00.470 resv_hugepages=0
00:05:00.470 14:06:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:00.470 surplus_hugepages=0
00:05:00.470 14:06:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:00.470 anon_hugepages=0
00:05:00.470 14:06:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:00.470 14:06:52 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:00.470 14:06:52 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
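The arithmetic checks just traced, together with the HugePages_Total fetch that follows, are the core invariant of this test: the kernel's hugepage pool must come out exactly as configured once surplus and reserved pages are folded in. Restated as a self-contained sketch (the meminfo helper here is illustrative, not the harness's own):

    #!/usr/bin/env bash
    # Invariant behind hugepages.sh@107-110: HugePages_Total should equal the
    # requested pool plus surplus plus reserved pages (512 + 0 + 0 in this run).
    meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }   # illustrative helper
    nr_hugepages=512                      # pool size this test configured
    surp=$(meminfo HugePages_Surp)        # 0 in this run
    resv=$(meminfo HugePages_Rsvd)        # 0 in this run
    total=$(meminfo HugePages_Total)      # 512 in this run
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'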
00:05:00.470 14:06:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:00.470 14:06:52 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:00.470 14:06:52 -- setup/common.sh@18 -- # local node=
00:05:00.470 14:06:52 -- setup/common.sh@19 -- # local var val
00:05:00.470 14:06:52 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.470 14:06:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.470 14:06:52 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.470 14:06:52 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.470 14:06:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.470 14:06:52 -- setup/common.sh@31 -- # IFS=': '
00:05:00.470 14:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5266704 kB' 'MemAvailable: 10531152 kB' 'Buffers: 40192 kB' 'Cached: 5323172 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115760 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138820 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157264 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301984 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67936 kB' 'KernelStack: 4448 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 513584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[xtrace scan elided: MemTotal through FilePmdMapped each compared against HugePages_Total and skipped via continue]
00:05:00.471 14:06:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:00.471 14:06:52 -- setup/common.sh@33 -- # echo 512
00:05:00.471 14:06:52 -- setup/common.sh@33 -- # return 0
00:05:00.471 14:06:52 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:00.471 14:06:52 -- setup/hugepages.sh@112 -- # get_nodes
00:05:00.471 14:06:52 -- setup/hugepages.sh@27 -- # local node
00:05:00.471 14:06:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.471 14:06:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:00.472 14:06:52 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:00.472 14:06:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:00.472 14:06:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.472 14:06:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:00.472 14:06:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:00.472 14:06:52 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.472 14:06:52 -- setup/common.sh@18 -- # local node=0
00:05:00.472 14:06:52 -- setup/common.sh@19 -- # local var val
00:05:00.472 14:06:52 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.472 14:06:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.472 14:06:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:00.472 14:06:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:00.472 14:06:52 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.472 14:06:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.472 14:06:52 -- setup/common.sh@31 -- # IFS=': '
00:05:00.472 14:06:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 5266708 kB' 'MemUsed: 6976260 kB' 'SwapCached: 0 kB' 'Active: 1375824 kB' 'Inactive: 4115700 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 138760 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 5363364 kB' 'Mapped: 67328 kB' 'AnonPages: 157228 kB' 'Shmem: 2596 kB' 'KernelStack: 4336 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234048 kB' 'Slab: 301984 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace scan elided: node0 fields MemTotal through HugePages_Free each compared against HugePages_Surp and skipped via continue]
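For the per-node pass, get_meminfo is called with a node argument (get_meminfo HugePages_Surp 0 above), which swaps mem_f for the sysfs per-node file, as the common.sh@23-24 records show. Those files prefix every line with the node id, which is why the trace strips "${mem[@]#Node +([0-9]) }" at common.sh@29. A small stand-alone illustration (node id 0 assumed, as in this run):

    #!/usr/bin/env bash
    # Read a counter from the per-node meminfo; lines there look like
    # "Node 0 HugePages_Surp:     0", unlike the bare keys in /proc/meminfo.
    shopt -s extglob
    node=0
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, as at common.sh@29
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && { echo "node$node surplus: $val"; break; }
    done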
00:05:00.473 14:06:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.473 14:06:52 -- setup/common.sh@33 -- # echo 0
00:05:00.473 14:06:52 -- setup/common.sh@33 -- # return 0
00:05:00.473 14:06:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.473 14:06:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.473 14:06:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.473 14:06:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.473 14:06:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:00.473 node0=512 expecting 512
00:05:00.473 14:06:52 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:00.473 real 0m1.047s
00:05:00.473 user 0m0.328s
00:05:00.473 sys 0m0.655s
00:05:00.473 14:06:52 -- common/autotest_common.sh@1115 -- # xtrace_disable
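custom_alloc closes by comparing the per-node counts it derived (nodes_test, seeded with nr_hugepages plus resv and surp) against what the kernel reports (nodes_sys), which produces the node0=512 expecting 512 line and the final [[ 512 == \5\1\2 ]] check. A rough reconstruction of that comparison from the hugepages.sh@126-130 records above (array contents are this run's values; the exact shape of the final check is an assumption):

    #!/usr/bin/env bash
    # Reconstructed shape of the per-node verification; illustrative only.
    nodes_test=([0]=512)   # counts the test derived
    nodes_sys=([0]=512)    # counts enumerated from /sys/devices/system/node
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # Collapsing both sides to sorted key sets makes the check order-independent:
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'per-node hugepage counts match'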
xtrace_disable
00:05:00.473 14:06:52 -- common/autotest_common.sh@10 -- # set +x
00:05:00.473 ************************************
00:05:00.473 END TEST custom_alloc
00:05:00.473 ************************************
00:05:00.732 14:06:52 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:00.732 14:06:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:00.732 14:06:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:00.732 14:06:52 -- common/autotest_common.sh@10 -- # set +x
00:05:00.732 ************************************
00:05:00.732 START TEST no_shrink_alloc
00:05:00.732 ************************************
00:05:00.732 14:06:52 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:05:00.732 14:06:52 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:00.732 14:06:52 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:00.732 14:06:52 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:00.733 14:06:52 -- setup/hugepages.sh@51 -- # shift
00:05:00.733 14:06:52 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:00.733 14:06:52 -- setup/hugepages.sh@52 -- # local node_ids
00:05:00.733 14:06:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:00.733 14:06:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:00.733 14:06:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:00.733 14:06:52 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:00.733 14:06:52 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:00.733 14:06:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:00.733 14:06:52 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:00.733 14:06:52 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:00.733 14:06:52 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:00.733 14:06:52 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:00.733 14:06:52 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:00.733 14:06:52 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:00.733 14:06:52 -- setup/hugepages.sh@73 -- # return 0
00:05:00.733 14:06:52 -- setup/hugepages.sh@198 -- # setup output
00:05:00.733 14:06:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.733 14:06:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:00.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:00.991 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
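no_shrink_alloc starts the same way: get_test_nr_hugepages 2097152 0 asks for 2 GiB of hugepages pinned to node 0, and the traced assignments (size=2097152, nr_hugepages=1024, nodes_test[0]=1024) imply the usual kB-to-page conversion against the 2048 kB default hugepage size. The arithmetic, spelled out as a sketch (variable names follow the trace; the division itself is inferred, the trace only shows its result):

    #!/usr/bin/env bash
    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages, all on node 0.
    size=2097152                                                                  # kB, from hugepages.sh@49
    default_hugepages=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)   # 2048 kB here
    (( size >= default_hugepages )) || exit 1                                     # the @55 guard
    nr_hugepages=$(( size / default_hugepages ))                                  # 1024, as at @57
    echo "nr_hugepages=$nr_hugepages on node 0"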
00:05:01.560 14:06:53 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:01.560 14:06:53 -- setup/hugepages.sh@89 -- # local node
00:05:01.560 14:06:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:01.560 14:06:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:01.560 14:06:53 -- setup/hugepages.sh@92 -- # local surp
00:05:01.560 14:06:53 -- setup/hugepages.sh@93 -- # local resv
00:05:01.560 14:06:53 -- setup/hugepages.sh@94 -- # local anon
00:05:01.560 14:06:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:01.560 14:06:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:01.560 14:06:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:01.560 14:06:53 -- setup/common.sh@18 -- # local node=
00:05:01.561 14:06:53 -- setup/common.sh@19 -- # local var val
00:05:01.561 14:06:53 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.561 14:06:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.561 14:06:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.561 14:06:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.561 14:06:53 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.561 14:06:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': '
00:05:01.561 14:06:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4220448 kB' 'MemAvailable: 9484896 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375840 kB' 'Inactive: 4115656 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 138716 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157416 kB' 'Mapped: 67436 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301856 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67808 kB' 'KernelStack: 4320 kB' 'PageTables: 3284 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
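Before counting, verify_nr_hugepages gates on transparent hugepages: the hugepages.sh@96 record matches the selector string "always [madvise] never" against *[never]*, and only treats AnonHugePages as relevant when THP is not fully disabled. That selector format is what /sys/kernel/mm/transparent_hugepage/enabled reports, with the bracketed word being the active mode (madvise in this run); the path itself is inferred, not shown in the trace. Restated as a sketch:

    #!/usr/bin/env bash
    # THP gate from hugepages.sh@96: if THP is not '[never]', anonymous huge
    # pages may exist and are read from AnonHugePages (0 kB on this runner).
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *\[never\]* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=$anon"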
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 
00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.561 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.561 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.562 14:06:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.562 14:06:53 -- setup/common.sh@33 -- # echo 0 00:05:01.562 14:06:53 -- setup/common.sh@33 -- # return 0 00:05:01.562 14:06:53 -- setup/hugepages.sh@97 -- # anon=0 00:05:01.562 14:06:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.562 14:06:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.562 14:06:53 -- setup/common.sh@18 -- # local node= 00:05:01.562 14:06:53 -- setup/common.sh@19 -- # local var val 00:05:01.562 14:06:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:01.562 14:06:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.562 14:06:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.562 14:06:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.562 14:06:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.562 14:06:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.562 14:06:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4220700 kB' 'MemAvailable: 9485148 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375840 kB' 'Inactive: 4115252 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 138312 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 156932 kB' 'Mapped: 67332 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301856 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67808 kB' 'KernelStack: 4288 kB' 'PageTables: 3204 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:01.562 14:06:53 -- setup/common.sh@31 -- # read 
[trace condensed: setup/common.sh@31-32 read/match loop walks MemTotal … HugePages_Rsvd, continuing until HugePages_Surp matches]
00:05:01.563 14:06:53 -- setup/common.sh@33 -- # echo 0
00:05:01.563 14:06:53 -- setup/common.sh@33 -- # return 0
00:05:01.563 14:06:53 -- setup/hugepages.sh@99 -- # surp=0
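[Editor's note: every get_meminfo call in this trace (AnonHugePages and HugePages_Surp above, HugePages_Rsvd and HugePages_Total below) follows the same visible pattern: choose /proc/meminfo or a per-node meminfo file, strip the "Node N" prefix, then read "key: value" pairs until the requested field matches. A condensed reconstruction written from the trace, not copied from setup/common.sh:]

    # Reconstruction of the get_meminfo pattern seen in this trace.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # Per-node counters live under /sys/devices/system/node/nodeN/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N"
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_sketch HugePages_Surp     # system-wide, prints 0 here
    get_meminfo_sketch HugePages_Surp 0   # NUMA node 0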
00:05:01.563 14:06:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.563 14:06:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.563 14:06:53 -- setup/common.sh@18 -- # local node=
00:05:01.563 14:06:53 -- setup/common.sh@19 -- # local var val
00:05:01.563 14:06:53 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.563 14:06:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.563 14:06:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.563 14:06:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.563 14:06:53 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.563 14:06:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.563 14:06:53 -- setup/common.sh@31 -- # IFS=': '
00:05:01.563 14:06:53 -- setup/common.sh@31 -- # read -r var val _
00:05:01.563 14:06:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4220700 kB' 'MemAvailable: 9485148 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115436 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138496 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157152 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301856 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67808 kB' 'KernelStack: 4304 kB' 'PageTables: 3232 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: setup/common.sh@31-32 read/match loop walks MemTotal … HugePages_Free, continuing until HugePages_Rsvd matches]
00:05:01.825 14:06:53 -- setup/common.sh@33 -- # echo 0
00:05:01.825 14:06:53 -- setup/common.sh@33 -- # return 0
00:05:01.825 14:06:53 -- setup/hugepages.sh@100 -- # resv=0
00:05:01.825 14:06:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:01.825 nr_hugepages=1024
00:05:01.825 14:06:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:01.825 resv_hugepages=0
00:05:01.825 14:06:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:01.825 surplus_hugepages=0
00:05:01.825 14:06:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.825 anon_hugepages=0
00:05:01.825 14:06:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.825 14:06:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
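[Editor's note: the two arithmetic guards above, hugepages.sh@107 and @109, are the core assertion of verify_nr_hugepages: the kernel's hugepage total must equal the expected count plus surplus and reserved pages. A standalone version of the same check, assuming only the standard /proc/meminfo fields:]

    # Sketch: re-run the trace's hugepage accounting check by hand.
    expected=1024
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == expected + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    fi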
00:05:01.825 14:06:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:01.825 14:06:53 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:01.825 14:06:53 -- setup/common.sh@18 -- # local node=
00:05:01.825 14:06:53 -- setup/common.sh@19 -- # local var val
00:05:01.825 14:06:53 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.825 14:06:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.825 14:06:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.825 14:06:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.825 14:06:53 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.825 14:06:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.825 14:06:53 -- setup/common.sh@31 -- # IFS=': '
00:05:01.825 14:06:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4220700 kB' 'MemAvailable: 9485148 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115704 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138764 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157368 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301856 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67808 kB' 'KernelStack: 4356 kB' 'PageTables: 3192 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: setup/common.sh@31-32 read/match loop walks MemTotal … FilePmdMapped, continuing until HugePages_Total matches]
00:05:01.826 14:06:53 -- setup/common.sh@33 -- # echo 1024
00:05:01.826 14:06:53 -- setup/common.sh@33 -- # return 0
00:05:01.826 14:06:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
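[Editor's note: the trace next enters get_nodes, which discovers NUMA nodes by globbing /sys/devices/system/node and records the expected page count per node; on this single-node VM that is simply nodes_sys[0]=1024 and no_nodes=1. A sketch of the same enumeration, assuming bash extglob as the trace's node+([0-9]) pattern requires:]

    # Sketch of the node enumeration visible in the get_nodes trace.
    shopt -s extglob nullglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024   # expected hugepages for this node
    done
    echo "no_nodes=${#nodes_sys[@]}"     # prints no_nodes=1 on this VM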
00:05:01.826 14:06:53 -- setup/hugepages.sh@112 -- # get_nodes
00:05:01.826 14:06:53 -- setup/hugepages.sh@27 -- # local node
00:05:01.826 14:06:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.826 14:06:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:01.826 14:06:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:01.826 14:06:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:01.826 14:06:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.826 14:06:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.826 14:06:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.826 14:06:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.826 14:06:53 -- setup/common.sh@18 -- # local node=0
00:05:01.826 14:06:53 -- setup/common.sh@19 -- # local var val
00:05:01.826 14:06:53 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.826 14:06:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.826 14:06:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:01.826 14:06:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:01.826 14:06:53 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.826 14:06:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.826 14:06:53 -- setup/common.sh@31 -- # IFS=': '
00:05:01.826 14:06:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4221456 kB' 'MemUsed: 8021512 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115660 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138720 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 5363368 kB' 'Mapped: 67328 kB' 'AnonPages: 157324 kB' 'Shmem: 2596 kB' 'KernelStack: 4324 kB' 'PageTables: 3372 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234048 kB' 'Slab: 301856 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: the per-node setup/common.sh@31-32 read/match loop walks node0's meminfo fields toward HugePages_Surp; the loop continues]
setup/common.sh@31 -- # read -r var val _ 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # continue 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:01.827 14:06:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:01.827 14:06:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.827 14:06:53 -- setup/common.sh@33 -- # echo 0 00:05:01.827 14:06:53 -- setup/common.sh@33 -- # return 0 00:05:01.827 node0=1024 expecting 1024 00:05:01.827 14:06:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.827 14:06:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.827 14:06:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.827 14:06:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.827 14:06:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.827 14:06:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.827 14:06:53 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:01.827 14:06:53 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:01.827 14:06:53 -- setup/hugepages.sh@202 -- # setup output 00:05:01.827 14:06:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.827 14:06:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:02.086 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.086 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:02.086 14:06:54 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:02.086 14:06:54 -- setup/hugepages.sh@89 -- # local node 00:05:02.086 14:06:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.086 14:06:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.086 14:06:54 -- setup/hugepages.sh@92 -- # local surp 00:05:02.086 14:06:54 -- setup/hugepages.sh@93 -- # local resv 00:05:02.086 14:06:54 -- setup/hugepages.sh@94 -- # local anon 00:05:02.086 14:06:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.086 14:06:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.086 14:06:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.086 14:06:54 -- setup/common.sh@18 -- # local node= 00:05:02.086 14:06:54 -- setup/common.sh@19 -- # local var val 00:05:02.086 14:06:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.086 14:06:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.086 14:06:54 -- 
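The scan traced above is setup/common.sh's get_meminfo helper: it snapshots either /proc/meminfo or the per-node meminfo file, strips the "Node <id> " prefix that the sysfs variant adds, and walks the snapshot field by field until the requested counter matches. A minimal sketch of that flow, assuming bash with extglob; this is a simplification of the trace, not the verbatim SPDK helper:

#!/usr/bin/env bash
# Sketch of the get_meminfo scan traced above (assumption: condensed from
# the xtrace output; the real spdk/test/setup/common.sh differs in detail).
shopt -s extglob

get_meminfo() {
	local get=$1 node=$2 var val line
	local mem_f mem
	mem_f=/proc/meminfo
	# Prefer the per-NUMA-node view when a node id was passed and exists.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Node meminfo prefixes every line with "Node <id> "; strip it so both
	# sources parse identically.
	mem=("${mem[@]#Node +([0-9]) }")
	# Scan field by field; print the value of the first matching field.
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Surp 0   # -> 0 against the node0 snapshot above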
00:05:02.086 14:06:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.086 14:06:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.086 14:06:54 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.086 14:06:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.086 14:06:54 -- setup/common.sh@31 -- # IFS=': '
00:05:02.086 14:06:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216704 kB' 'MemAvailable: 9481152 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375840 kB' 'Inactive: 4116260 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 139320 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157848 kB' 'Mapped: 67632 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302016 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67968 kB' 'KernelStack: 4536 kB' 'PageTables: 4096 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:02.086 14:06:54 -- setup/common.sh@31-32 -- # field-by-field scan of the /proc/meminfo snapshot above, continue on each non-matching field until AnonHugePages
00:05:02.349 14:06:54 -- setup/common.sh@33 -- # echo 0
00:05:02.349 14:06:54 -- setup/common.sh@33 -- # return 0
00:05:02.349 14:06:54 -- setup/hugepages.sh@97 -- # anon=0
00:05:02.349 14:06:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:02.349 14:06:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:02.349 14:06:54 -- setup/common.sh@18 -- # local node=
00:05:02.349 14:06:54 -- setup/common.sh@19 -- # local var val
00:05:02.349 14:06:54 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.349 14:06:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.349 14:06:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.349 14:06:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.349 14:06:54 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.349 14:06:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.349 14:06:54 -- setup/common.sh@31 -- # IFS=': '
00:05:02.349 14:06:54 -- setup/common.sh@31 -- # read -r var val _
00:05:02.349 14:06:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216704 kB' 'MemAvailable: 9481152 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375840 kB' 'Inactive: 4115944 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 139004 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976940 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157436 kB' 'Mapped: 67292 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 301944 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67896 kB' 'KernelStack: 4396 kB' 'PageTables: 3544 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
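Note that with no node argument, node= is empty, so the /sys/devices/system/node/node/meminfo existence test fails and the probe falls back to /proc/meminfo. The same counters can be spot-checked outside the test harness; the only wrinkle is that the per-node file prefixes each line with "Node <id>". Two awk one-liners (a cross-check, not part of the test) that return the same values as the scans above:

# System-wide surplus hugepages, equivalent to get_meminfo HugePages_Surp:
awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo
# node0 free hugepages; fields shift by two because of the "Node 0" prefix:
awk '$3 == "HugePages_Free:" {print $4}' /sys/devices/system/node/node0/meminfo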
00:05:02.349 14:06:54 -- setup/common.sh@31-32 -- # field-by-field scan of the snapshot above, continue on each non-matching field until HugePages_Surp
00:05:02.352 14:06:54 -- setup/common.sh@33 -- # echo 0
00:05:02.352 14:06:54 -- setup/common.sh@33 -- # return 0
00:05:02.352 14:06:54 -- setup/hugepages.sh@99 -- # surp=0
00:05:02.352 14:06:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:02.352 14:06:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:02.352 14:06:54 -- setup/common.sh@18 -- # local node=
00:05:02.352 14:06:54 -- setup/common.sh@19 -- # local var val
00:05:02.352 14:06:54 -- setup/common.sh@20 -- # local mem_f mem
00:05:02.352 14:06:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:02.352 14:06:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:02.352 14:06:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:02.352 14:06:54 -- setup/common.sh@28 -- # mapfile -t mem
00:05:02.352 14:06:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': '
00:05:02.352 14:06:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216956 kB' 'MemAvailable: 9481408 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115700 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138756 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976944 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157208 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302024 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67976 kB' 'KernelStack: 4368 kB' 'PageTables: 3336 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:05:02.350 14:06:54 -- setup/common.sh@31-32 -- # field-by-field scan of the snapshot above, continue on each non-matching field until HugePages_Rsvd
00:05:02.352 14:06:54 -- setup/common.sh@33 -- # echo 0
00:05:02.352 14:06:54 -- setup/common.sh@33 -- # return 0
00:05:02.352 14:06:54 -- setup/hugepages.sh@100 -- # resv=0
00:05:02.352 14:06:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:02.352 14:06:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:02.352 14:06:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:02.352 14:06:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:02.352 14:06:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:02.352 14:06:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:02.352 14:06:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:02.352 14:06:54 -- setup/common.sh@17 -- # local get=HugePages_Total
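With the three probes done, verify_nr_hugepages holds anon=0, surp=0 and resv=0, and the checks at setup/hugepages.sh@107-110 assert that the kernel-reported pool matches the requested count once surplus and reserved pages are accounted for. A condensed sketch of that arithmetic, with variable names taken from the trace (assumption: simplified logic, not the verbatim script):

#!/usr/bin/env bash
# Sketch of the consistency check traced at setup/hugepages.sh@107-110.
nr_hugepages=1024
surp=0
resv=0

# Kernel-reported pool size, as the traced get_meminfo call returns it.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

# The configured pool must equal the reported total net of surplus pages
# the kernel grew on its own and pages already reserved by mappings.
if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
	echo "hugepage pool consistent: total=$total"
else
	echo "unexpected hugepage accounting: total=$total" >&2
	exit 1
fi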
local get=HugePages_Total 00:05:02.352 14:06:54 -- setup/common.sh@18 -- # local node= 00:05:02.352 14:06:54 -- setup/common.sh@19 -- # local var val 00:05:02.352 14:06:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.352 14:06:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.352 14:06:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.352 14:06:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.352 14:06:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.352 14:06:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216956 kB' 'MemAvailable: 9481408 kB' 'Buffers: 40192 kB' 'Cached: 5323176 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115604 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138660 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976944 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 157072 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 234048 kB' 'Slab: 302024 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67976 kB' 'KernelStack: 4336 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 509124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 
00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.352 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.352 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.353 14:06:54 -- setup/common.sh@33 -- # echo 1024 00:05:02.353 14:06:54 -- setup/common.sh@33 -- # return 0 00:05:02.353 14:06:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.353 14:06:54 -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.353 
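For orientation, the key-by-key scan traced above is setup/common.sh's get_meminfo helper walking a meminfo file until it reaches the requested field (HugePages_Total here, which resolves to 1024). A condensed sketch of that logic, paraphrased rather than quoted from the source; the mapfile/printf plumbing visible in the trace is elided:

    # Paraphrase of the get_meminfo loop traced above: walk "key: value"
    # pairs and print the value of the requested key.
    get_meminfo_sketch() { # usage: get_meminfo_sketch HugePages_Total [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # per-node lookups (as in the HugePages_Surp read that follows) use
        # the node's own meminfo, whose lines carry a "Node N " prefix
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }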
14:06:54 -- setup/hugepages.sh@27 -- # local node 00:05:02.353 14:06:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.353 14:06:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.353 14:06:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.353 14:06:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.353 14:06:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.353 14:06:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.353 14:06:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.353 14:06:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.353 14:06:54 -- setup/common.sh@18 -- # local node=0 00:05:02.353 14:06:54 -- setup/common.sh@19 -- # local var val 00:05:02.353 14:06:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:02.353 14:06:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.353 14:06:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.353 14:06:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.353 14:06:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.353 14:06:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242968 kB' 'MemFree: 4216956 kB' 'MemUsed: 8026012 kB' 'SwapCached: 0 kB' 'Active: 1375832 kB' 'Inactive: 4115772 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 138828 kB' 'Active(file): 1374764 kB' 'Inactive(file): 3976944 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 5363368 kB' 'Mapped: 67288 kB' 'AnonPages: 156968 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3136 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234048 kB' 'Slab: 302024 kB' 'SReclaimable: 234048 kB' 'SUnreclaim: 67976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.353 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.353 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # continue 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:02.354 14:06:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:02.354 14:06:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.354 14:06:54 -- setup/common.sh@33 -- # echo 0 00:05:02.354 14:06:54 -- setup/common.sh@33 -- # return 0 00:05:02.354 14:06:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.354 14:06:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.354 14:06:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.354 14:06:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.354 14:06:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.354 node0=1024 expecting 1024 00:05:02.354 14:06:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.354 00:05:02.354 real 0m1.839s 00:05:02.354 user 0m0.658s 00:05:02.354 sys 0m1.062s 00:05:02.354 14:06:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.354 14:06:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.354 ************************************ 00:05:02.354 END TEST no_shrink_alloc 00:05:02.354 ************************************ 00:05:02.613 14:06:54 -- setup/hugepages.sh@217 -- # clear_hp 00:05:02.613 14:06:54 -- setup/hugepages.sh@37 -- # local node hp 00:05:02.613 14:06:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.613 14:06:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.613 14:06:54 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.613 14:06:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.614 14:06:54 -- setup/hugepages.sh@41 -- # echo 0 00:05:02.614 14:06:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.614 14:06:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.614 ************************************ 00:05:02.614 END TEST hugepages 00:05:02.614 ************************************ 00:05:02.614 00:05:02.614 real 0m8.621s 00:05:02.614 user 0m2.657s 00:05:02.614 sys 0m5.542s 00:05:02.614 14:06:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:02.614 14:06:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.614 14:06:54 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:02.614 14:06:54 -- common/autotest_common.sh@1087 
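The clear_hp teardown traced just before END TEST hugepages zeroes every per-node hugepage pool so the next suite starts clean. Roughly, with the caveat that the xtrace records only the bare 'echo 0' lines, so the nr_hugepages target is inferred from the sysfs layout:

    # Approximate shape of clear_hp as traced above.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node_dir"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages" # release this pool (inferred knob)
        done
    done
    export CLEAR_HUGE=yes # tell later setup stages to re-reserve pages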
-- # '[' 2 -le 1 ']' 00:05:02.614 14:06:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.614 14:06:54 -- common/autotest_common.sh@10 -- # set +x 00:05:02.614 ************************************ 00:05:02.614 START TEST driver 00:05:02.614 ************************************ 00:05:02.614 14:06:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:02.614 * Looking for test storage... 00:05:02.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:02.614 14:06:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.614 14:06:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.614 14:06:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.873 14:06:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.873 14:06:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.873 14:06:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.873 14:06:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.873 14:06:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.873 14:06:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.873 14:06:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.873 14:06:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.873 14:06:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.873 14:06:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.873 14:06:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.873 14:06:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.873 14:06:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.873 14:06:54 -- scripts/common.sh@344 -- # : 1 00:05:02.873 14:06:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.873 14:06:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.873 14:06:54 -- scripts/common.sh@364 -- # decimal 1 00:05:02.873 14:06:54 -- scripts/common.sh@352 -- # local d=1 00:05:02.873 14:06:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.873 14:06:54 -- scripts/common.sh@354 -- # echo 1 00:05:02.873 14:06:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.873 14:06:54 -- scripts/common.sh@365 -- # decimal 2 00:05:02.873 14:06:54 -- scripts/common.sh@352 -- # local d=2 00:05:02.873 14:06:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.873 14:06:54 -- scripts/common.sh@354 -- # echo 2 00:05:02.873 14:06:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.873 14:06:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.873 14:06:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.873 14:06:54 -- scripts/common.sh@367 -- # return 0 00:05:02.873 14:06:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.873 14:06:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.873 --rc genhtml_branch_coverage=1 00:05:02.873 --rc genhtml_function_coverage=1 00:05:02.873 --rc genhtml_legend=1 00:05:02.873 --rc geninfo_all_blocks=1 00:05:02.873 --rc geninfo_unexecuted_blocks=1 00:05:02.873 00:05:02.873 ' 00:05:02.873 14:06:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.873 --rc genhtml_branch_coverage=1 00:05:02.873 --rc genhtml_function_coverage=1 00:05:02.873 --rc genhtml_legend=1 00:05:02.873 --rc geninfo_all_blocks=1 00:05:02.873 --rc geninfo_unexecuted_blocks=1 00:05:02.873 00:05:02.873 ' 00:05:02.873 14:06:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.873 --rc genhtml_branch_coverage=1 00:05:02.873 --rc genhtml_function_coverage=1 00:05:02.873 --rc genhtml_legend=1 00:05:02.873 --rc geninfo_all_blocks=1 00:05:02.873 --rc geninfo_unexecuted_blocks=1 00:05:02.873 00:05:02.873 ' 00:05:02.873 14:06:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.873 --rc genhtml_branch_coverage=1 00:05:02.873 --rc genhtml_function_coverage=1 00:05:02.873 --rc genhtml_legend=1 00:05:02.873 --rc geninfo_all_blocks=1 00:05:02.873 --rc geninfo_unexecuted_blocks=1 00:05:02.873 00:05:02.873 ' 00:05:02.873 14:06:54 -- setup/driver.sh@68 -- # setup reset 00:05:02.873 14:06:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.873 14:06:54 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:03.132 14:06:55 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:03.132 14:06:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.132 14:06:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.132 14:06:55 -- common/autotest_common.sh@10 -- # set +x 00:05:03.132 ************************************ 00:05:03.132 START TEST guess_driver 00:05:03.132 ************************************ 00:05:03.132 14:06:55 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:03.132 14:06:55 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:03.132 14:06:55 -- setup/driver.sh@47 -- # local fail=0 00:05:03.132 14:06:55 -- setup/driver.sh@49 -- # pick_driver 00:05:03.132 14:06:55 -- setup/driver.sh@36 -- # vfio 
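The lcov probe traced a few lines back (cmp_versions 1.15 '<' 2 in scripts/common.sh) decides which coverage flags to export by splitting both version strings on '.', '-' and ':' and comparing numerically field by field. A minimal sketch of the '<' path:

    # Sketch of the cmp_versions comparison: true when $1 sorts before $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < max; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0 # first lower field wins
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1 # equal is not less-than
    }
    version_lt 1.15 2 && echo 'old lcov: use the branch/function coverage opts'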
00:05:03.132 14:06:55 -- setup/driver.sh@21 -- # local iommu_grups 00:05:03.132 14:06:55 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:03.132 14:06:55 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:03.132 14:06:55 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:03.132 14:06:55 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:03.132 14:06:55 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:03.132 14:06:55 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:03.132 14:06:55 -- setup/driver.sh@32 -- # return 1 00:05:03.132 14:06:55 -- setup/driver.sh@38 -- # uio 00:05:03.132 14:06:55 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:03.132 14:06:55 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:03.132 14:06:55 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:03.132 14:06:55 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:03.132 14:06:55 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:03.132 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:03.132 14:06:55 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:03.132 Looking for driver=uio_pci_generic 00:05:03.132 14:06:55 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:03.132 14:06:55 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:03.132 14:06:55 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:03.132 14:06:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.132 14:06:55 -- setup/driver.sh@45 -- # setup output config 00:05:03.132 14:06:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.132 14:06:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.700 14:06:55 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:03.700 14:06:55 -- setup/driver.sh@58 -- # continue 00:05:03.700 14:06:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.700 14:06:55 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:03.700 14:06:55 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:03.700 14:06:55 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.079 14:06:56 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:05.079 14:06:56 -- setup/driver.sh@65 -- # setup reset 00:05:05.079 14:06:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.079 14:06:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.338 ************************************ 00:05:05.338 END TEST guess_driver 00:05:05.338 ************************************ 00:05:05.338 00:05:05.338 real 0m2.030s 00:05:05.338 user 0m0.456s 00:05:05.338 sys 0m1.581s 00:05:05.338 14:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.338 14:06:57 -- common/autotest_common.sh@10 -- # set +x 00:05:05.338 ************************************ 00:05:05.338 END TEST driver 00:05:05.338 ************************************ 00:05:05.338 00:05:05.338 real 0m2.713s 00:05:05.338 user 0m0.792s 00:05:05.338 sys 0m1.946s 00:05:05.338 14:06:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.338 14:06:57 -- common/autotest_common.sh@10 -- # set +x 00:05:05.338 14:06:57 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:05.338 14:06:57 -- common/autotest_common.sh@1087 -- # 
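guess_driver, just traced to completion, is a two-step fallback: vfio-pci qualifies only when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), and uio_pci_generic qualifies when modprobe can resolve its .ko; this VM has no IOMMU, so uio_pci_generic wins. An outline of that decision, assuming nullglob so an empty iommu_groups directory expands to zero entries, as the '(( 0 > 0 ))' trace implies:

    # Outline of the pick_driver fallback traced above (paraphrased).
    shopt -s nullglob
    pick_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
            echo vfio-pci && return 0
        fi
        # fall back to uio_pci_generic when its module dependency chain resolves
        if modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic && return 0
        fi
        echo 'No valid driver found'
    }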
'[' 2 -le 1 ']' 00:05:05.338 14:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.338 14:06:57 -- common/autotest_common.sh@10 -- # set +x 00:05:05.338 ************************************ 00:05:05.338 START TEST devices 00:05:05.338 ************************************ 00:05:05.338 14:06:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:05.338 * Looking for test storage... 00:05:05.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:05.338 14:06:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.338 14:06:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.338 14:06:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.598 14:06:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.598 14:06:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.598 14:06:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.598 14:06:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.598 14:06:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.598 14:06:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.598 14:06:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.598 14:06:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.598 14:06:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.598 14:06:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.598 14:06:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.598 14:06:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.598 14:06:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.598 14:06:57 -- scripts/common.sh@344 -- # : 1 00:05:05.598 14:06:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.598 14:06:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.598 14:06:57 -- scripts/common.sh@364 -- # decimal 1 00:05:05.598 14:06:57 -- scripts/common.sh@352 -- # local d=1 00:05:05.598 14:06:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.598 14:06:57 -- scripts/common.sh@354 -- # echo 1 00:05:05.598 14:06:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.598 14:06:57 -- scripts/common.sh@365 -- # decimal 2 00:05:05.598 14:06:57 -- scripts/common.sh@352 -- # local d=2 00:05:05.598 14:06:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.598 14:06:57 -- scripts/common.sh@354 -- # echo 2 00:05:05.598 14:06:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.598 14:06:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.598 14:06:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.598 14:06:57 -- scripts/common.sh@367 -- # return 0 00:05:05.598 14:06:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.598 14:06:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.598 --rc genhtml_branch_coverage=1 00:05:05.598 --rc genhtml_function_coverage=1 00:05:05.598 --rc genhtml_legend=1 00:05:05.598 --rc geninfo_all_blocks=1 00:05:05.598 --rc geninfo_unexecuted_blocks=1 00:05:05.598 00:05:05.598 ' 00:05:05.598 14:06:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.598 --rc genhtml_branch_coverage=1 00:05:05.598 --rc genhtml_function_coverage=1 00:05:05.598 --rc genhtml_legend=1 00:05:05.598 --rc geninfo_all_blocks=1 00:05:05.598 --rc geninfo_unexecuted_blocks=1 00:05:05.598 00:05:05.598 ' 00:05:05.598 14:06:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.598 --rc genhtml_branch_coverage=1 00:05:05.598 --rc genhtml_function_coverage=1 00:05:05.598 --rc genhtml_legend=1 00:05:05.598 --rc geninfo_all_blocks=1 00:05:05.598 --rc geninfo_unexecuted_blocks=1 00:05:05.598 00:05:05.598 ' 00:05:05.598 14:06:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.598 --rc genhtml_branch_coverage=1 00:05:05.598 --rc genhtml_function_coverage=1 00:05:05.598 --rc genhtml_legend=1 00:05:05.598 --rc geninfo_all_blocks=1 00:05:05.598 --rc geninfo_unexecuted_blocks=1 00:05:05.598 00:05:05.598 ' 00:05:05.598 14:06:57 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:05.598 14:06:57 -- setup/devices.sh@192 -- # setup reset 00:05:05.598 14:06:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.598 14:06:57 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:05.857 14:06:57 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:05.857 14:06:57 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:05.857 14:06:57 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:05.857 14:06:57 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:05.857 14:06:57 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:05.857 14:06:57 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:05.857 14:06:57 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:05.857 14:06:57 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.857 14:06:57 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:05.857 14:06:57 -- setup/devices.sh@196 -- # blocks=() 00:05:05.857 14:06:57 -- setup/devices.sh@196 -- # declare -a blocks 00:05:05.857 14:06:57 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:05.857 14:06:57 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:05.857 14:06:57 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:05.857 14:06:57 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.857 14:06:57 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:05.857 14:06:57 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:05.857 14:06:57 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:05.857 14:06:57 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:05.857 14:06:57 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:05.857 14:06:57 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:05.857 14:06:57 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:06.116 No valid GPT data, bailing 00:05:06.116 14:06:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:06.116 14:06:57 -- scripts/common.sh@393 -- # pt= 00:05:06.116 14:06:57 -- scripts/common.sh@394 -- # return 1 00:05:06.116 14:06:57 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:06.116 14:06:57 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:06.116 14:06:57 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:06.116 14:06:57 -- setup/common.sh@80 -- # echo 5368709120 00:05:06.116 14:06:57 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:06.116 14:06:57 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:06.116 14:06:57 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:06.116 14:06:57 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:06.116 14:06:57 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:06.116 14:06:57 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:06.116 14:06:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.116 14:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.116 14:06:57 -- common/autotest_common.sh@10 -- # set +x 00:05:06.116 ************************************ 00:05:06.116 START TEST nvme_mount 00:05:06.116 ************************************ 00:05:06.116 14:06:58 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:06.116 14:06:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:06.116 14:06:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:06.116 14:06:58 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:06.116 14:06:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:06.116 14:06:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:06.116 14:06:58 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.116 14:06:58 -- setup/common.sh@40 -- # local part_no=1 00:05:06.116 14:06:58 -- setup/common.sh@41 -- # local size=1073741824 00:05:06.116 14:06:58 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.116 14:06:58 -- setup/common.sh@44 -- # parts=() 00:05:06.116 14:06:58 -- setup/common.sh@44 -- # local parts 00:05:06.116 14:06:58 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.116 14:06:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.116 14:06:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.116 14:06:58 -- 
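The device scan opening TEST devices admits a block device only if it is not zoned, carries no live partition table (the 'No valid GPT data, bailing' line is spdk-gpt.py reporting exactly that), and is at least min_disk_size bytes; the 5 GiB QEMU namespace passes all three gates. A paraphrased sketch (the real glob also excludes nvme*c* controller nodes, omitted here):

    # Paraphrase of the disk-eligibility gates traced above.
    min_disk_size=3221225472 # 3 GiB, as in the trace
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # gate 1: skip zoned namespaces
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # gate 2: skip disks whose partition table is in use
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
        # gate 3: require min_disk_size (the size file counts 512-byte sectors)
        (($(<"$block/size") * 512 >= min_disk_size)) && echo "test disk: $dev"
    done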
setup/common.sh@46 -- # (( part++ )) 00:05:06.116 14:06:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.117 14:06:58 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:06.117 14:06:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.117 14:06:58 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:07.053 Creating new GPT entries in memory. 00:05:07.054 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:07.054 other utilities. 00:05:07.054 14:06:59 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:07.054 14:06:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.054 14:06:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:07.054 14:06:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:07.054 14:06:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:08.430 Creating new GPT entries in memory. 00:05:08.430 The operation has completed successfully. 00:05:08.430 14:07:00 -- setup/common.sh@57 -- # (( part++ )) 00:05:08.430 14:07:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.430 14:07:00 -- setup/common.sh@62 -- # wait 108153 00:05:08.430 14:07:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.430 14:07:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:08.430 14:07:00 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.430 14:07:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:08.430 14:07:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:08.430 14:07:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.430 14:07:00 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.430 14:07:00 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:08.430 14:07:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:08.430 14:07:00 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:08.430 14:07:00 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:08.430 14:07:00 -- setup/devices.sh@53 -- # local found=0 00:05:08.430 14:07:00 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.430 14:07:00 -- setup/devices.sh@56 -- # : 00:05:08.430 14:07:00 -- setup/devices.sh@59 -- # local pci status 00:05:08.430 14:07:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.430 14:07:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:08.430 14:07:00 -- setup/devices.sh@47 -- # setup output config 00:05:08.430 14:07:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.430 14:07:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.430 14:07:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.430 14:07:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:08.430 14:07:00 -- 
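partition_drive, traced above, is careful in two ways: it serializes sgdisk behind flock on the disk node, and it launches sync_dev_uevents.sh so the test proceeds only after udev has seen the new partition (the 'wait' that follows collects that helper). The arithmetic reduces to the sector numbers visible in the trace:

    # The partitioning step above, reduced: the 1 GiB 'size' is reused as a
    # sector count after dividing by 4096, giving the 2048..264191 span.
    disk=/dev/nvme0n1
    size=1073741824
    ((size /= 4096)) # 262144 sectors
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:$((2048 + size - 1))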
setup/devices.sh@63 -- # found=1 00:05:08.430 14:07:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.430 14:07:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.430 14:07:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.430 14:07:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:08.430 14:07:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.809 14:07:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.809 14:07:01 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:09.809 14:07:01 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.809 14:07:01 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.809 14:07:01 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.809 14:07:01 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:09.809 14:07:01 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.809 14:07:01 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.809 14:07:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.809 14:07:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:09.809 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.809 14:07:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.809 14:07:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.809 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.809 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:09.809 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:09.809 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:09.809 14:07:01 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:09.809 14:07:01 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:09.809 14:07:01 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.809 14:07:01 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:09.809 14:07:01 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:09.809 14:07:01 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.809 14:07:01 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.809 14:07:01 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:09.809 14:07:01 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:09.809 14:07:01 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:09.809 14:07:01 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:09.809 14:07:01 -- setup/devices.sh@53 -- # local found=0 00:05:09.809 14:07:01 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:09.809 14:07:01 -- setup/devices.sh@56 -- # : 00:05:09.809 14:07:01 -- setup/devices.sh@59 -- # local pci status 
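The mkfs helper traced here (used twice: first on partition nvme0n1p1, then, after the cleanup pass, on the bare disk with an explicit 1024M size) is three moves: make the mountpoint, force an ext4 format, mount. In outline:

    # Outline of the mkfs/mount step traced above.
    part=/dev/nvme0n1p1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$part" # -q quiet, -F format even over an existing signature
    mount "$part" "$mnt"
    : > "$mnt/test_nvme" # marker file the verify step checks for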
00:05:09.809 14:07:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.809 14:07:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:09.809 14:07:01 -- setup/devices.sh@47 -- # setup output config 00:05:09.809 14:07:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.809 14:07:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:09.809 14:07:01 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.809 14:07:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:09.809 14:07:01 -- setup/devices.sh@63 -- # found=1 00:05:09.809 14:07:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.809 14:07:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.809 14:07:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.809 14:07:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:09.809 14:07:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.739 14:07:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.739 14:07:03 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:11.739 14:07:03 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.739 14:07:03 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.739 14:07:03 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:11.739 14:07:03 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:11.739 14:07:03 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:11.739 14:07:03 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:11.739 14:07:03 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:11.739 14:07:03 -- setup/devices.sh@50 -- # local mount_point= 00:05:11.739 14:07:03 -- setup/devices.sh@51 -- # local test_file= 00:05:11.739 14:07:03 -- setup/devices.sh@53 -- # local found=0 00:05:11.739 14:07:03 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:11.739 14:07:03 -- setup/devices.sh@59 -- # local pci status 00:05:11.739 14:07:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.739 14:07:03 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:11.739 14:07:03 -- setup/devices.sh@47 -- # setup output config 00:05:11.739 14:07:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.739 14:07:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.739 14:07:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:11.739 14:07:03 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:11.739 14:07:03 -- setup/devices.sh@63 -- # found=1 00:05:11.739 14:07:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.739 14:07:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:11.739 14:07:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.739 14:07:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:11.739 14:07:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.641 14:07:05 -- setup/devices.sh@66 -- # (( found == 1 )) 
00:05:13.641 14:07:05 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.641 14:07:05 -- setup/devices.sh@68 -- # return 0 00:05:13.641 14:07:05 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:13.641 14:07:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:13.641 14:07:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.641 14:07:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.641 14:07:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.641 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.641 00:05:13.641 real 0m7.234s 00:05:13.641 user 0m0.693s 00:05:13.641 sys 0m4.534s 00:05:13.641 14:07:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.641 ************************************ 00:05:13.641 14:07:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.641 END TEST nvme_mount 00:05:13.641 ************************************ 00:05:13.641 14:07:05 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:13.641 14:07:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.641 14:07:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.641 14:07:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.641 ************************************ 00:05:13.641 START TEST dm_mount 00:05:13.641 ************************************ 00:05:13.641 14:07:05 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:13.641 14:07:05 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:13.641 14:07:05 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:13.641 14:07:05 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:13.641 14:07:05 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:13.641 14:07:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:13.641 14:07:05 -- setup/common.sh@40 -- # local part_no=2 00:05:13.641 14:07:05 -- setup/common.sh@41 -- # local size=1073741824 00:05:13.641 14:07:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:13.641 14:07:05 -- setup/common.sh@44 -- # parts=() 00:05:13.641 14:07:05 -- setup/common.sh@44 -- # local parts 00:05:13.641 14:07:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:13.641 14:07:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.641 14:07:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.641 14:07:05 -- setup/common.sh@46 -- # (( part++ )) 00:05:13.641 14:07:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.641 14:07:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:13.641 14:07:05 -- setup/common.sh@46 -- # (( part++ )) 00:05:13.641 14:07:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:13.641 14:07:05 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:13.641 14:07:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:13.641 14:07:05 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:14.578 Creating new GPT entries in memory. 00:05:14.578 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:14.578 other utilities. 00:05:14.578 14:07:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:14.578 14:07:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:14.578 14:07:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:14.578 14:07:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:14.578 14:07:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:15.513 Creating new GPT entries in memory. 00:05:15.513 The operation has completed successfully. 00:05:15.513 14:07:07 -- setup/common.sh@57 -- # (( part++ )) 00:05:15.513 14:07:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.513 14:07:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.513 14:07:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.513 14:07:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:16.448 The operation has completed successfully. 00:05:16.448 14:07:08 -- setup/common.sh@57 -- # (( part++ )) 00:05:16.448 14:07:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.448 14:07:08 -- setup/common.sh@62 -- # wait 108656 00:05:16.448 14:07:08 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:16.448 14:07:08 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.448 14:07:08 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:16.448 14:07:08 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:16.448 14:07:08 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:16.448 14:07:08 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.448 14:07:08 -- setup/devices.sh@161 -- # break 00:05:16.448 14:07:08 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.448 14:07:08 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:16.448 14:07:08 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:16.448 14:07:08 -- setup/devices.sh@166 -- # dm=dm-0 00:05:16.448 14:07:08 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:16.448 14:07:08 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:16.448 14:07:08 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.448 14:07:08 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:16.448 14:07:08 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.448 14:07:08 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:16.448 14:07:08 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:16.448 14:07:08 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.448 14:07:08 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:16.448 14:07:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:16.448 14:07:08 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:16.448 14:07:08 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:16.448 14:07:08 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:16.448 14:07:08 -- setup/devices.sh@53 -- # local found=0 00:05:16.448 14:07:08 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 
]] 00:05:16.448 14:07:08 -- setup/devices.sh@56 -- # : 00:05:16.448 14:07:08 -- setup/devices.sh@59 -- # local pci status 00:05:16.448 14:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.448 14:07:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:16.448 14:07:08 -- setup/devices.sh@47 -- # setup output config 00:05:16.448 14:07:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.448 14:07:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.706 14:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.706 14:07:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:16.706 14:07:08 -- setup/devices.sh@63 -- # found=1 00:05:16.706 14:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.706 14:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.706 14:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.965 14:07:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:16.965 14:07:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.341 14:07:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.341 14:07:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:18.341 14:07:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.341 14:07:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.341 14:07:10 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:18.341 14:07:10 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:18.341 14:07:10 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:18.341 14:07:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:18.341 14:07:10 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:18.341 14:07:10 -- setup/devices.sh@50 -- # local mount_point= 00:05:18.341 14:07:10 -- setup/devices.sh@51 -- # local test_file= 00:05:18.341 14:07:10 -- setup/devices.sh@53 -- # local found=0 00:05:18.341 14:07:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:18.341 14:07:10 -- setup/devices.sh@59 -- # local pci status 00:05:18.341 14:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.341 14:07:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:18.341 14:07:10 -- setup/devices.sh@47 -- # setup output config 00:05:18.341 14:07:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.341 14:07:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:18.600 14:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.600 14:07:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:18.600 14:07:10 -- setup/devices.sh@63 -- # found=1 00:05:18.600 14:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.600 14:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == 
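dm_mount stitches the two fresh GPT partitions into one device-mapper target and then confirms, via the holders links traced above, that both partitions report dm-0 as their holder. A sketch; note the linear table fed to dmsetup is an assumption, since the xtrace records the 'dmsetup create nvme_dm_test' call but not its stdin:

    # Sketch of the dm assembly traced above; the table lines are assumed.
    table=$'0 262144 linear /dev/nvme0n1p1 0\n262144 262144 linear /dev/nvme0n1p2 0'
    dmsetup create nvme_dm_test <<< "$table"
    dm=$(readlink -f /dev/mapper/nvme_dm_test) # -> /dev/dm-0 in the trace
    dm=${dm##*/}
    # both backing partitions must now name dm-0 as a holder
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] &&
        [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] && echo "dm ok: $dm"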
\0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.600 14:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.600 14:07:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:18.600 14:07:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.976 14:07:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.976 14:07:12 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:19.976 14:07:12 -- setup/devices.sh@68 -- # return 0 00:05:19.976 14:07:12 -- setup/devices.sh@187 -- # cleanup_dm 00:05:19.976 14:07:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:19.976 14:07:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:19.976 14:07:12 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:20.235 14:07:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:20.235 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:20.235 14:07:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:20.235 00:05:20.235 real 0m6.810s 00:05:20.235 user 0m0.480s 00:05:20.235 sys 0m3.236s 00:05:20.235 ************************************ 00:05:20.235 END TEST dm_mount 00:05:20.235 ************************************ 00:05:20.235 14:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.235 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:05:20.235 14:07:12 -- setup/devices.sh@1 -- # cleanup 00:05:20.235 14:07:12 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:20.235 14:07:12 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:20.235 14:07:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:20.235 14:07:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:20.235 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.235 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:20.235 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:20.235 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:20.235 14:07:12 -- setup/devices.sh@12 -- # cleanup_dm 00:05:20.235 14:07:12 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:20.235 14:07:12 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:20.235 14:07:12 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:20.235 14:07:12 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:20.235 ************************************ 00:05:20.235 END TEST devices 00:05:20.235 ************************************ 00:05:20.235 00:05:20.235 real 0m14.922s 00:05:20.235 user 0m1.668s 00:05:20.235 sys 0m8.139s 00:05:20.235 14:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.235 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:05:20.235 00:05:20.235 real 0m32.510s 00:05:20.235 user 0m6.911s 00:05:20.235 sys 0m20.182s 00:05:20.235 14:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:20.235 
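The teardown traced above runs in strict order, unmount before dm removal before signature wipe, so nothing holds the disk when wipefs rewrites it; the hex dumps in the log ('45 46 49 20 50 41 52 54' is the ASCII 'EFI PART' GPT signature) are wipefs reporting each erased magic. Condensed:

    # Order of the cleanup traced above.
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2 # partition signatures first
    wipefs --all /dev/nvme0n1 # then the GPT itself (the 'EFI PART' magics)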
14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:05:20.235 ************************************ 00:05:20.235 END TEST setup.sh 00:05:20.235 ************************************ 00:05:20.235 14:07:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:20.493 Hugepages 00:05:20.493 node hugesize free / total 00:05:20.493 node0 1048576kB 0 / 0 00:05:20.493 node0 2048kB 2048 / 2048 00:05:20.493 00:05:20.493 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.493 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:20.753 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:20.753 14:07:12 -- spdk/autotest.sh@128 -- # uname -s 00:05:20.753 14:07:12 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:20.753 14:07:12 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:20.753 14:07:12 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.011 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:21.270 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:22.648 14:07:14 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:23.585 14:07:15 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:23.585 14:07:15 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:23.585 14:07:15 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:23.585 14:07:15 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:23.585 14:07:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:23.585 14:07:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:23.585 14:07:15 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:23.585 14:07:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:23.585 14:07:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:23.585 14:07:15 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:23.585 14:07:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:05:23.585 14:07:15 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:23.844 Waiting for block devices as requested 00:05:24.103 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:24.103 14:07:16 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:24.103 14:07:16 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:24.103 14:07:16 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:24.103 14:07:16 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:24.103 14:07:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:24.103 
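[Editor's note] The `get_nvme_ctrlr_from_bdf` helper traced above maps a PCI address to its nvme character device by resolving the sysfs symlink. A reduced sketch of that lookup, assuming the example address from this run:

    # Given a PCI BDF, find the nvme controller that lives under it in sysfs.
    bdf=0000:00:06.0   # example address from this run
    for ctrlr in /sys/class/nvme/nvme*; do
        [[ -e "$ctrlr" ]] || continue
        # e.g. /sys/class/nvme/nvme0 -> /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0
        if readlink -f "$ctrlr" | grep -q "$bdf/nvme/nvme"; then
            echo "/dev/$(basename "$ctrlr")"
        fi
    done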
14:07:16 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:24.103 14:07:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:24.103 14:07:16 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:24.103 14:07:16 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:24.103 14:07:16 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:24.103 14:07:16 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:24.103 14:07:16 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:24.103 14:07:16 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:24.103 14:07:16 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:24.103 14:07:16 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:24.103 14:07:16 -- common/autotest_common.sh@1552 -- # continue 00:05:24.103 14:07:16 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:24.103 14:07:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.103 14:07:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.103 14:07:16 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:24.103 14:07:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.103 14:07:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.103 14:07:16 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.693 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:24.693 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:25.641 14:07:17 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:25.641 14:07:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.641 14:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.641 14:07:17 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:25.641 14:07:17 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:25.641 14:07:17 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:25.641 14:07:17 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:25.641 14:07:17 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:25.641 14:07:17 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:25.641 14:07:17 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:25.641 14:07:17 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:25.641 14:07:17 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.641 14:07:17 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:25.641 14:07:17 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:25.641 14:07:17 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:25.641 14:07:17 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:05:25.641 14:07:17 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:25.641 14:07:17 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:25.641 14:07:17 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:25.641 14:07:17 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:25.641 14:07:17 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:25.641 14:07:17 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:25.641 14:07:17 -- common/autotest_common.sh@1588 -- # return 0 00:05:25.641 14:07:17 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:05:25.641 14:07:17 -- spdk/autotest.sh@149 -- # run_test unittest 
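[Editor's note] The probe above reads the Optional Admin Command Support (OACS) field from `nvme id-ctrl` and masks bit 3, which indicates namespace management support (here oacs=0x12a, so the bit is set). A sketch of the same check, assuming nvme-cli is installed and the controller path from this run:

    # Sketch of the OACS probe traced above.
    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    oacs_ns_manage=$((oacs & 0x8))   # bit 3: namespace management
    if [[ $oacs_ns_manage -ne 0 ]]; then
        echo "$ctrlr supports namespace management (oacs=$oacs)"
    fi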
/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:25.641 14:07:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.641 14:07:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.641 14:07:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.901 ************************************ 00:05:25.901 START TEST unittest 00:05:25.901 ************************************ 00:05:25.901 14:07:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:25.901 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:25.901 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:25.901 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:25.901 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:25.901 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:25.901 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:25.901 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:25.901 ++ rpc_py=rpc_cmd 00:05:25.901 ++ set -e 00:05:25.901 ++ shopt -s nullglob 00:05:25.901 ++ shopt -s extglob 00:05:25.901 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:25.901 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:25.901 +++ CONFIG_WPDK_DIR= 00:05:25.901 +++ CONFIG_ASAN=y 00:05:25.901 +++ CONFIG_VBDEV_COMPRESS=n 00:05:25.901 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:25.901 +++ CONFIG_USDT=n 00:05:25.901 +++ CONFIG_CUSTOMOCF=n 00:05:25.901 +++ CONFIG_PREFIX=/usr/local 00:05:25.901 +++ CONFIG_RBD=n 00:05:25.901 +++ CONFIG_LIBDIR= 00:05:25.901 +++ CONFIG_IDXD=y 00:05:25.901 +++ CONFIG_NVME_CUSE=y 00:05:25.901 +++ CONFIG_SMA=n 00:05:25.901 +++ CONFIG_VTUNE=n 00:05:25.901 +++ CONFIG_TSAN=n 00:05:25.901 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:25.901 +++ CONFIG_VFIO_USER_DIR= 00:05:25.901 +++ CONFIG_PGO_CAPTURE=n 00:05:25.901 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:25.901 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:25.901 +++ CONFIG_LTO=n 00:05:25.901 +++ CONFIG_ISCSI_INITIATOR=y 00:05:25.901 +++ CONFIG_CET=n 00:05:25.901 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:25.901 +++ CONFIG_OCF_PATH= 00:05:25.901 +++ CONFIG_RDMA_SET_TOS=y 00:05:25.901 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:25.901 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:25.901 +++ CONFIG_UBLK=n 00:05:25.901 +++ CONFIG_ISAL_CRYPTO=y 00:05:25.901 +++ CONFIG_OPENSSL_PATH= 00:05:25.901 +++ CONFIG_OCF=n 00:05:25.901 +++ CONFIG_FUSE=n 00:05:25.901 +++ CONFIG_VTUNE_DIR= 00:05:25.901 +++ CONFIG_FUZZER_LIB= 00:05:25.901 +++ CONFIG_FUZZER=n 00:05:25.901 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:05:25.901 +++ CONFIG_CRYPTO=n 00:05:25.901 +++ CONFIG_PGO_USE=n 00:05:25.901 +++ CONFIG_VHOST=y 00:05:25.901 +++ CONFIG_DAOS=n 00:05:25.901 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:05:25.901 +++ CONFIG_DAOS_DIR= 00:05:25.901 +++ CONFIG_UNIT_TESTS=y 00:05:25.901 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:25.901 +++ CONFIG_VIRTIO=y 00:05:25.901 +++ CONFIG_COVERAGE=y 00:05:25.901 +++ CONFIG_RDMA=y 00:05:25.901 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:25.901 +++ CONFIG_URING_PATH= 00:05:25.901 +++ CONFIG_XNVME=n 00:05:25.901 +++ CONFIG_VFIO_USER=n 00:05:25.901 +++ CONFIG_ARCH=native 00:05:25.901 +++ CONFIG_URING_ZNS=n 00:05:25.901 +++ CONFIG_WERROR=y 00:05:25.901 +++ CONFIG_HAVE_LIBBSD=n 00:05:25.901 +++ CONFIG_UBSAN=y 00:05:25.901 +++ CONFIG_IPSEC_MB_DIR= 00:05:25.901 +++ CONFIG_GOLANG=n 00:05:25.901 +++ CONFIG_ISAL=y 00:05:25.901 +++ CONFIG_IDXD_KERNEL=n 
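[Editor's note] unittest.sh opens with the self-locating preamble traced above: resolve its own directory, walk up to the repository root, then source the common helpers. The idiom, reduced to its core:

    # Sketch of the self-locating preamble traced above.
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")
    source "$rootdir/test/common/autotest_common.sh"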
00:05:25.901 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:25.901 +++ CONFIG_RDMA_PROV=verbs 00:05:25.901 +++ CONFIG_APPS=y 00:05:25.901 +++ CONFIG_SHARED=n 00:05:25.901 +++ CONFIG_FC_PATH= 00:05:25.901 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:25.901 +++ CONFIG_FC=n 00:05:25.901 +++ CONFIG_AVAHI=n 00:05:25.901 +++ CONFIG_FIO_PLUGIN=y 00:05:25.901 +++ CONFIG_RAID5F=y 00:05:25.901 +++ CONFIG_EXAMPLES=y 00:05:25.901 +++ CONFIG_TESTS=y 00:05:25.901 +++ CONFIG_CRYPTO_MLX5=n 00:05:25.901 +++ CONFIG_MAX_LCORES= 00:05:25.901 +++ CONFIG_IPSEC_MB=n 00:05:25.901 +++ CONFIG_DEBUG=y 00:05:25.901 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:25.901 +++ CONFIG_CROSS_PREFIX= 00:05:25.901 +++ CONFIG_URING=n 00:05:25.901 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:25.901 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:25.901 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:25.901 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:25.901 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:25.901 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:25.901 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:25.901 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:25.901 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:25.901 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:25.901 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:25.901 +++ VHOST_APP=("$_app_dir/vhost") 00:05:25.901 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:25.901 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:25.901 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:25.901 +++ [[ #ifndef SPDK_CONFIG_H 00:05:25.901 #define SPDK_CONFIG_H 00:05:25.901 #define SPDK_CONFIG_APPS 1 00:05:25.901 #define SPDK_CONFIG_ARCH native 00:05:25.901 #define SPDK_CONFIG_ASAN 1 00:05:25.901 #undef SPDK_CONFIG_AVAHI 00:05:25.901 #undef SPDK_CONFIG_CET 00:05:25.901 #define SPDK_CONFIG_COVERAGE 1 00:05:25.901 #define SPDK_CONFIG_CROSS_PREFIX 00:05:25.901 #undef SPDK_CONFIG_CRYPTO 00:05:25.901 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:25.901 #undef SPDK_CONFIG_CUSTOMOCF 00:05:25.901 #undef SPDK_CONFIG_DAOS 00:05:25.901 #define SPDK_CONFIG_DAOS_DIR 00:05:25.901 #define SPDK_CONFIG_DEBUG 1 00:05:25.901 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:25.901 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:05:25.901 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:05:25.901 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:05:25.901 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:25.902 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:25.902 #define SPDK_CONFIG_EXAMPLES 1 00:05:25.902 #undef SPDK_CONFIG_FC 00:05:25.902 #define SPDK_CONFIG_FC_PATH 00:05:25.902 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:25.902 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:25.902 #undef SPDK_CONFIG_FUSE 00:05:25.902 #undef SPDK_CONFIG_FUZZER 00:05:25.902 #define SPDK_CONFIG_FUZZER_LIB 00:05:25.902 #undef SPDK_CONFIG_GOLANG 00:05:25.902 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:25.902 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:25.902 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:25.902 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:25.902 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:25.902 #define SPDK_CONFIG_IDXD 1 00:05:25.902 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:25.902 #undef SPDK_CONFIG_IPSEC_MB 00:05:25.902 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:25.902 #define SPDK_CONFIG_ISAL 1 
00:05:25.902 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:25.902 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:25.902 #define SPDK_CONFIG_LIBDIR 00:05:25.902 #undef SPDK_CONFIG_LTO 00:05:25.902 #define SPDK_CONFIG_MAX_LCORES 00:05:25.902 #define SPDK_CONFIG_NVME_CUSE 1 00:05:25.902 #undef SPDK_CONFIG_OCF 00:05:25.902 #define SPDK_CONFIG_OCF_PATH 00:05:25.902 #define SPDK_CONFIG_OPENSSL_PATH 00:05:25.902 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:25.902 #undef SPDK_CONFIG_PGO_USE 00:05:25.902 #define SPDK_CONFIG_PREFIX /usr/local 00:05:25.902 #define SPDK_CONFIG_RAID5F 1 00:05:25.902 #undef SPDK_CONFIG_RBD 00:05:25.902 #define SPDK_CONFIG_RDMA 1 00:05:25.902 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:25.902 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:25.902 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:25.902 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:25.902 #undef SPDK_CONFIG_SHARED 00:05:25.902 #undef SPDK_CONFIG_SMA 00:05:25.902 #define SPDK_CONFIG_TESTS 1 00:05:25.902 #undef SPDK_CONFIG_TSAN 00:05:25.902 #undef SPDK_CONFIG_UBLK 00:05:25.902 #define SPDK_CONFIG_UBSAN 1 00:05:25.902 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:25.902 #undef SPDK_CONFIG_URING 00:05:25.902 #define SPDK_CONFIG_URING_PATH 00:05:25.902 #undef SPDK_CONFIG_URING_ZNS 00:05:25.902 #undef SPDK_CONFIG_USDT 00:05:25.902 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:25.902 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:25.902 #undef SPDK_CONFIG_VFIO_USER 00:05:25.902 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:25.902 #define SPDK_CONFIG_VHOST 1 00:05:25.902 #define SPDK_CONFIG_VIRTIO 1 00:05:25.902 #undef SPDK_CONFIG_VTUNE 00:05:25.902 #define SPDK_CONFIG_VTUNE_DIR 00:05:25.902 #define SPDK_CONFIG_WERROR 1 00:05:25.902 #define SPDK_CONFIG_WPDK_DIR 00:05:25.902 #undef SPDK_CONFIG_XNVME 00:05:25.902 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:25.902 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:25.902 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:25.902 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:25.902 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.902 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.902 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:25.902 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:25.902 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:25.902 ++++ export PATH 00:05:25.902 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:25.902 ++ source 
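[Editor's note] The trace above pattern-matches the entire generated config header against the SPDK_CONFIG_DEBUG define to detect a debug build. A sketch of that probe, using the header path from this run:

    # Sketch of the debug-build probe traced above.
    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e "$config_h" && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build"
    fi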
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:25.902 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:25.902 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:25.902 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:25.902 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:25.902 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:25.902 +++ TEST_TAG=N/A 00:05:25.902 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:25.902 ++ : 1 00:05:25.902 ++ export RUN_NIGHTLY 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_RUN_VALGRIND 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_TEST_UNITTEST 00:05:25.902 ++ : 00:05:25.902 ++ export SPDK_TEST_AUTOBUILD 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_RELEASE_BUILD 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_ISAL 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_ISCSI 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_TEST_NVME 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVME_PMR 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVME_BP 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVME_CLI 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVME_CUSE 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVME_FDP 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVMF 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_VFIOUSER 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_FUZZER 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_FUZZER_SHORT 00:05:25.902 ++ : rdma 00:05:25.902 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_RBD 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_VHOST 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_TEST_BLOCKDEV 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_IOAT 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_BLOBFS 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_VHOST_INIT 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_LVOL 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_RUN_ASAN 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_RUN_UBSAN 00:05:25.902 ++ : /home/vagrant/spdk_repo/dpdk/build 00:05:25.902 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_RUN_NON_ROOT 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_CRYPTO 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_FTL 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_OCF 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_VMD 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_OPAL 00:05:25.902 ++ : v22.11.4 00:05:25.902 ++ export SPDK_TEST_NATIVE_DPDK 00:05:25.902 ++ : true 00:05:25.902 ++ export SPDK_AUTOTEST_X 00:05:25.902 ++ : 1 00:05:25.902 ++ export SPDK_TEST_RAID5 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_URING 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_USDT 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_USE_IGB_UIO 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_SCHEDULER 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_SCANBUILD 00:05:25.902 ++ : 00:05:25.902 ++ export SPDK_TEST_NVMF_NICS 00:05:25.902 ++ : 0 00:05:25.902 ++ 
export SPDK_TEST_SMA 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_DAOS 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_XNVME 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_ACCEL_DSA 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_ACCEL_IAA 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_ACCEL_IOAT 00:05:25.902 ++ : 00:05:25.902 ++ export SPDK_TEST_FUZZER_TARGET 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_TEST_NVMF_MDNS 00:05:25.902 ++ : 0 00:05:25.902 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:25.902 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:25.902 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:25.902 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:25.902 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:25.902 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:25.902 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:25.902 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:25.902 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:25.902 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:25.902 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:25.902 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:25.902 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:25.902 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:25.902 ++ PYTHONDONTWRITEBYTECODE=1 00:05:25.903 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:25.903 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:25.903 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:25.903 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:25.903 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:25.903 ++ rm -rf /var/tmp/asan_suppression_file 00:05:25.903 ++ cat 00:05:25.903 ++ echo leak:libfuse3.so 00:05:25.903 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:25.903 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:25.903 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:25.903 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:25.903 ++ '[' -z /var/spdk/dependencies ']' 00:05:25.903 ++ export DEPENDENCY_DIR 00:05:25.903 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:25.903 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:25.903 ++ 
export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:25.903 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:25.903 ++ export QEMU_BIN= 00:05:25.903 ++ QEMU_BIN= 00:05:25.903 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:25.903 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:25.903 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:25.903 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:25.903 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:25.903 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:25.903 ++ _LCOV_MAIN=0 00:05:25.903 ++ _LCOV_LLVM=1 00:05:25.903 ++ _LCOV= 00:05:25.903 ++ [[ '' == *clang* ]] 00:05:25.903 ++ [[ 0 -eq 1 ]] 00:05:25.903 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:05:25.903 ++ _lcov_opt[_LCOV_MAIN]= 00:05:25.903 ++ lcov_opt= 00:05:25.903 ++ '[' 0 -eq 0 ']' 00:05:25.903 ++ export valgrind= 00:05:25.903 ++ valgrind= 00:05:25.903 +++ uname -s 00:05:25.903 ++ '[' Linux = Linux ']' 00:05:25.903 ++ HUGEMEM=4096 00:05:25.903 ++ export CLEAR_HUGE=yes 00:05:25.903 ++ CLEAR_HUGE=yes 00:05:25.903 ++ [[ 0 -eq 1 ]] 00:05:25.903 ++ [[ 0 -eq 1 ]] 00:05:25.903 ++ MAKE=make 00:05:25.903 +++ nproc 00:05:25.903 ++ MAKEFLAGS=-j10 00:05:25.903 ++ export HUGEMEM=4096 00:05:25.903 ++ HUGEMEM=4096 00:05:25.903 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:25.903 ++ NO_HUGE=() 00:05:25.903 ++ TEST_MODE= 00:05:25.903 ++ [[ -z '' ]] 00:05:25.903 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:25.903 ++ exec 00:05:25.903 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:25.903 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:25.903 ++ set_test_storage 2147483648 00:05:25.903 ++ [[ -v testdir ]] 00:05:25.903 ++ local requested_size=2147483648 00:05:25.903 ++ local mount target_dir 00:05:25.903 ++ local -A mounts fss sizes avails uses 00:05:25.903 ++ local source fs size avail mount use 00:05:25.903 ++ local storage_fallback storage_candidates 00:05:25.903 +++ mktemp -udt spdk.XXXXXX 00:05:25.903 ++ storage_fallback=/tmp/spdk.iGz0cK 00:05:25.903 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:25.903 ++ [[ -n '' ]] 00:05:25.903 ++ [[ -n '' ]] 00:05:25.903 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.iGz0cK/tests/unit /tmp/spdk.iGz0cK 00:05:25.903 ++ requested_size=2214592512 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 +++ df -T 00:05:25.903 +++ grep -v Filesystem 00:05:25.903 ++ mounts["$mount"]=tmpfs 00:05:25.903 ++ fss["$mount"]=tmpfs 00:05:25.903 ++ avails["$mount"]=1252601856 00:05:25.903 ++ sizes["$mount"]=1253683200 00:05:25.903 ++ uses["$mount"]=1081344 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 ++ mounts["$mount"]=/dev/vda1 00:05:25.903 ++ fss["$mount"]=ext4 00:05:25.903 ++ avails["$mount"]=9651216384 00:05:25.903 ++ sizes["$mount"]=20616794112 00:05:25.903 ++ uses["$mount"]=10948800512 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 ++ mounts["$mount"]=tmpfs 00:05:25.903 ++ fss["$mount"]=tmpfs 00:05:25.903 ++ avails["$mount"]=6268399616 00:05:25.903 ++ sizes["$mount"]=6268399616 00:05:25.903 ++ uses["$mount"]=0 00:05:25.903 ++ read -r source fs size use avail _ mount 
00:05:25.903 ++ mounts["$mount"]=tmpfs 00:05:25.903 ++ fss["$mount"]=tmpfs 00:05:25.903 ++ avails["$mount"]=5242880 00:05:25.903 ++ sizes["$mount"]=5242880 00:05:25.903 ++ uses["$mount"]=0 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 ++ mounts["$mount"]=/dev/vda15 00:05:25.903 ++ fss["$mount"]=vfat 00:05:25.903 ++ avails["$mount"]=103061504 00:05:25.903 ++ sizes["$mount"]=109395968 00:05:25.903 ++ uses["$mount"]=6334464 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 ++ mounts["$mount"]=tmpfs 00:05:25.903 ++ fss["$mount"]=tmpfs 00:05:25.903 ++ avails["$mount"]=1253675008 00:05:25.903 ++ sizes["$mount"]=1253679104 00:05:25.903 ++ uses["$mount"]=4096 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:05:25.903 ++ fss["$mount"]=fuse.sshfs 00:05:25.903 ++ avails["$mount"]=97209196544 00:05:25.903 ++ sizes["$mount"]=105088212992 00:05:25.903 ++ uses["$mount"]=2493583360 00:05:25.903 ++ read -r source fs size use avail _ mount 00:05:25.903 ++ printf '* Looking for test storage...\n' 00:05:25.903 * Looking for test storage... 00:05:25.903 ++ local target_space new_size 00:05:25.903 ++ for target_dir in "${storage_candidates[@]}" 00:05:25.903 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:25.903 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:25.903 ++ mount=/ 00:05:25.903 ++ target_space=9651216384 00:05:25.903 ++ (( target_space == 0 || target_space < requested_size )) 00:05:25.903 ++ (( target_space >= requested_size )) 00:05:25.903 ++ [[ ext4 == tmpfs ]] 00:05:25.903 ++ [[ ext4 == ramfs ]] 00:05:25.903 ++ [[ / == / ]] 00:05:25.903 ++ new_size=13163393024 00:05:25.903 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:25.903 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:25.903 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:25.903 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:25.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:25.903 ++ return 0 00:05:25.903 ++ set -o errtrace 00:05:25.903 ++ shopt -s extdebug 00:05:25.903 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:25.903 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:25.903 14:07:17 -- common/autotest_common.sh@1682 -- # true 00:05:25.903 14:07:17 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:05:25.903 14:07:17 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:25.903 14:07:17 -- common/autotest_common.sh@29 -- # exec 00:05:25.903 14:07:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:25.903 14:07:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
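[Editor's note] `set_test_storage`, traced above, parses `df -T` into associative arrays and picks the first candidate directory whose backing mount can hold the requested size. A reduced sketch of the space check for a single directory, assuming GNU df (the --output flag is a simplification of the trace's awk parsing):

    # Reduced sketch of the set_test_storage space check traced above.
    requested_size=2147483648   # ~2 GiB, as requested in this run
    target_dir=/home/vagrant/spdk_repo/spdk/test/unit
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=$(df --output=avail -B1 "$target_dir" | tail -n1)
    if (( target_space >= requested_size )); then
        echo "* Found test storage at $target_dir (mount $mount)"
    fi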
0 : 0 - 1]' 00:05:25.903 14:07:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:25.903 14:07:17 -- common/autotest_common.sh@18 -- # set -x 00:05:25.903 14:07:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.903 14:07:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.903 14:07:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.903 14:07:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.903 14:07:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.903 14:07:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.903 14:07:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.903 14:07:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.903 14:07:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.903 14:07:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.903 14:07:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.903 14:07:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.903 14:07:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.903 14:07:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.903 14:07:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.903 14:07:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.903 14:07:17 -- scripts/common.sh@344 -- # : 1 00:05:25.903 14:07:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.903 14:07:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.903 14:07:17 -- scripts/common.sh@364 -- # decimal 1 00:05:25.903 14:07:17 -- scripts/common.sh@352 -- # local d=1 00:05:25.903 14:07:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.903 14:07:17 -- scripts/common.sh@354 -- # echo 1 00:05:25.903 14:07:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.903 14:07:17 -- scripts/common.sh@365 -- # decimal 2 00:05:25.903 14:07:17 -- scripts/common.sh@352 -- # local d=2 00:05:25.903 14:07:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.903 14:07:17 -- scripts/common.sh@354 -- # echo 2 00:05:25.903 14:07:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.903 14:07:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.903 14:07:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.903 14:07:17 -- scripts/common.sh@367 -- # return 0 00:05:25.903 14:07:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.903 14:07:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.903 --rc genhtml_branch_coverage=1 00:05:25.903 --rc genhtml_function_coverage=1 00:05:25.903 --rc genhtml_legend=1 00:05:25.903 --rc geninfo_all_blocks=1 00:05:25.903 --rc geninfo_unexecuted_blocks=1 00:05:25.903 00:05:25.903 ' 00:05:25.903 14:07:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.904 --rc genhtml_branch_coverage=1 00:05:25.904 --rc genhtml_function_coverage=1 00:05:25.904 --rc genhtml_legend=1 00:05:25.904 --rc geninfo_all_blocks=1 00:05:25.904 --rc geninfo_unexecuted_blocks=1 00:05:25.904 00:05:25.904 ' 00:05:25.904 14:07:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.904 --rc genhtml_branch_coverage=1 00:05:25.904 --rc genhtml_function_coverage=1 00:05:25.904 --rc genhtml_legend=1 00:05:25.904 --rc geninfo_all_blocks=1 00:05:25.904 --rc 
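[Editor's note] The `lt 1.15 2` call traced above comes from the cmp_versions helper in scripts/common.sh, which splits version strings on `.`, `-`, and `:` and compares component by component. A reduced sketch of the idiom, assuming numeric components:

    # Reduced sketch of the cmp_versions "<" case traced above.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # Missing components compare as 0 (so 1.15 vs 2 -> 1.15 < 2.0).
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # true here: 1 < 2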
geninfo_unexecuted_blocks=1 00:05:25.904 00:05:25.904 ' 00:05:25.904 14:07:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.904 --rc genhtml_branch_coverage=1 00:05:25.904 --rc genhtml_function_coverage=1 00:05:25.904 --rc genhtml_legend=1 00:05:25.904 --rc geninfo_all_blocks=1 00:05:25.904 --rc geninfo_unexecuted_blocks=1 00:05:25.904 00:05:25.904 ' 00:05:25.904 14:07:17 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:25.904 14:07:17 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:25.904 14:07:17 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:25.904 14:07:17 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:25.904 14:07:17 -- unit/unittest.sh@174 -- # [[ y == y ]] 00:05:25.904 14:07:17 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:25.904 14:07:17 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:25.904 14:07:17 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:40.794 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:40.794 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:40.794 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:40.794 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:40.794 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:40.794 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:07.420 14:07:56 -- unit/unittest.sh@182 -- # uname -m 00:06:07.420 14:07:56 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']' 00:06:07.420 14:07:56 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:07.420 14:07:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.420 14:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.420 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.420 ************************************ 00:06:07.420 START TEST unittest_pci_event 00:06:07.420 ************************************ 00:06:07.420 14:07:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:07.420 00:06:07.420 00:06:07.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.420 http://cunit.sourceforge.net/ 00:06:07.420 00:06:07.420 00:06:07.420 Suite: pci_event 00:06:07.420 Test: test_pci_parse_event ...[2024-11-18 14:07:56.712994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:07.420 passed 00:06:07.420 00:06:07.420 [2024-11-18 14:07:56.713829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:07.420 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.420 suites 1 1 
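[Editor's note] The coverage run above starts with an initial (-i) lcov capture, so files whose tests never execute still appear in the final report with zero counts. A sketch of that baseline step with the flags used in this run (output path is an example):

    # Sketch of the coverage baseline capture traced above.
    UT_COVERAGE=output/ut_coverage   # example output directory
    mkdir -p "$UT_COVERAGE"
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         -q -c --no-external -i -d . -t Baseline \
         -o "$UT_COVERAGE/ut_cov_base.info"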
n/a 0 0 00:06:07.420 tests 1 1 1 0 0 00:06:07.420 asserts 15 15 15 0 n/a 00:06:07.420 00:06:07.420 Elapsed time = 0.001 seconds 00:06:07.420 00:06:07.420 real 0m0.031s 00:06:07.420 user 0m0.021s 00:06:07.420 sys 0m0.008s 00:06:07.420 14:07:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.420 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.420 ************************************ 00:06:07.420 END TEST unittest_pci_event 00:06:07.420 ************************************ 00:06:07.420 14:07:56 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:07.420 14:07:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.420 14:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.420 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.420 ************************************ 00:06:07.420 START TEST unittest_include 00:06:07.420 ************************************ 00:06:07.420 14:07:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:07.420 00:06:07.420 00:06:07.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.420 http://cunit.sourceforge.net/ 00:06:07.420 00:06:07.420 00:06:07.420 Suite: histogram 00:06:07.420 Test: histogram_test ...passed 00:06:07.420 Test: histogram_merge ...passed 00:06:07.420 00:06:07.420 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.420 suites 1 1 n/a 0 0 00:06:07.420 tests 2 2 2 0 0 00:06:07.420 asserts 50 50 50 0 n/a 00:06:07.420 00:06:07.420 Elapsed time = 0.006 seconds 00:06:07.420 00:06:07.420 real 0m0.040s 00:06:07.420 user 0m0.032s 00:06:07.420 sys 0m0.008s 00:06:07.420 14:07:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.420 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.420 ************************************ 00:06:07.420 END TEST unittest_include 00:06:07.420 ************************************ 00:06:07.420 14:07:56 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev 00:06:07.420 14:07:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.420 14:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.420 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.420 ************************************ 00:06:07.420 START TEST unittest_bdev 00:06:07.420 ************************************ 00:06:07.420 14:07:56 -- common/autotest_common.sh@1114 -- # unittest_bdev 00:06:07.420 14:07:56 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:07.420 00:06:07.420 00:06:07.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.420 http://cunit.sourceforge.net/ 00:06:07.420 00:06:07.420 00:06:07.420 Suite: bdev 00:06:07.420 Test: bytes_to_blocks_test ...passed 00:06:07.420 Test: num_blocks_test ...passed 00:06:07.420 Test: io_valid_test ...passed 00:06:07.420 Test: open_write_test ...[2024-11-18 14:07:56.950384] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:07.420 [2024-11-18 14:07:56.950634] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:07.420 [2024-11-18 14:07:56.950724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write 
by module bdev_ut 00:06:07.420 passed 00:06:07.420 Test: claim_test ...passed 00:06:07.420 Test: alias_add_del_test ...[2024-11-18 14:07:57.011307] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:07.420 [2024-11-18 14:07:57.011411] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:07.420 [2024-11-18 14:07:57.011455] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:07.420 passed 00:06:07.420 Test: get_device_stat_test ...passed 00:06:07.420 Test: bdev_io_types_test ...passed 00:06:07.420 Test: bdev_io_wait_test ...passed 00:06:07.420 Test: bdev_io_spans_split_test ...passed 00:06:07.420 Test: bdev_io_boundary_split_test ...passed 00:06:07.420 Test: bdev_io_max_size_and_segment_split_test ...[2024-11-18 14:07:57.129488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:07.420 passed 00:06:07.420 Test: bdev_io_mix_split_test ...passed 00:06:07.420 Test: bdev_io_split_with_io_wait ...passed 00:06:07.420 Test: bdev_io_write_unit_split_test ...[2024-11-18 14:07:57.217202] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:07.421 [2024-11-18 14:07:57.217291] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:07.421 [2024-11-18 14:07:57.217326] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:07.421 [2024-11-18 14:07:57.217363] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:07.421 passed 00:06:07.421 Test: bdev_io_alignment_with_boundary ...passed 00:06:07.421 Test: bdev_io_alignment ...passed 00:06:07.421 Test: bdev_histograms ...passed 00:06:07.421 Test: bdev_write_zeroes ...passed 00:06:07.421 Test: bdev_compare_and_write ...passed 00:06:07.421 Test: bdev_compare ...passed 00:06:07.421 Test: bdev_compare_emulated ...passed 00:06:07.421 Test: bdev_zcopy_write ...passed 00:06:07.421 Test: bdev_zcopy_read ...passed 00:06:07.421 Test: bdev_open_while_hotremove ...passed 00:06:07.421 Test: bdev_close_while_hotremove ...passed 00:06:07.421 Test: bdev_open_ext_test ...passed 00:06:07.421 Test: bdev_open_ext_unregister ...[2024-11-18 14:07:57.554437] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:07.421 [2024-11-18 14:07:57.554647] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:07.421 passed 00:06:07.421 Test: bdev_set_io_timeout ...passed 00:06:07.421 Test: bdev_set_qd_sampling ...passed 00:06:07.421 Test: lba_range_overlap ...passed 00:06:07.421 Test: lock_lba_range_check_ranges ...passed 00:06:07.421 Test: lock_lba_range_with_io_outstanding ...passed 00:06:07.421 Test: lock_lba_range_overlapped ...passed 00:06:07.421 Test: bdev_quiesce ...[2024-11-18 14:07:57.713276] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:07.421 passed 00:06:07.421 Test: bdev_io_abort ...passed 00:06:07.421 Test: bdev_unmap ...passed 00:06:07.421 Test: bdev_write_zeroes_split_test ...passed 00:06:07.421 Test: bdev_set_options_test ...passed 00:06:07.421 Test: bdev_get_memory_domains ...passed 00:06:07.421 Test: bdev_io_ext ...[2024-11-18 14:07:57.814265] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:07.421 passed 00:06:07.421 Test: bdev_io_ext_no_opts ...passed 00:06:07.421 Test: bdev_io_ext_invalid_opts ...passed 00:06:07.421 Test: bdev_io_ext_split ...passed 00:06:07.421 Test: bdev_io_ext_bounce_buffer ...passed 00:06:07.421 Test: bdev_register_uuid_alias ...[2024-11-18 14:07:57.976898] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name f6b187c6-a74b-4dd5-8a73-1fe249546051 already exists 00:06:07.421 [2024-11-18 14:07:57.976964] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:f6b187c6-a74b-4dd5-8a73-1fe249546051 alias for bdev bdev0 00:06:07.421 passed 00:06:07.421 Test: bdev_unregister_by_name ...[2024-11-18 14:07:57.992724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:07.421 passed 00:06:07.421 Test: for_each_bdev_test ...[2024-11-18 14:07:57.992776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:07.421 passed 00:06:07.421 Test: bdev_seek_test ...passed 00:06:07.421 Test: bdev_copy ...passed 00:06:07.421 Test: bdev_copy_split_test ...passed 00:06:07.421 Test: examine_locks ...passed 00:06:07.421 Test: claim_v2_rwo ...[2024-11-18 14:07:58.080941] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081013] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081033] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081085] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081101] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:07.421 passed 00:06:07.421 Test: claim_v2_rom ...[2024-11-18 14:07:58.081141] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:07.421 [2024-11-18 14:07:58.081282] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:07.421 passed 00:06:07.421 Test: claim_v2_rwm ...[2024-11-18 14:07:58.081337] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081358] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: 
type read_many_write_none by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081379] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081436] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:07.421 [2024-11-18 14:07:58.081473] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:07.421 [2024-11-18 14:07:58.081627] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:07.421 [2024-11-18 14:07:58.081693] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081720] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081743] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081765] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081789] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.081831] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:07.421 passed 00:06:07.421 Test: claim_v2_existing_writer ...[2024-11-18 14:07:58.081983] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:07.421 [2024-11-18 14:07:58.082013] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:07.421 passed 00:06:07.421 Test: claim_v2_existing_v1 ...[2024-11-18 14:07:58.082138] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:07.421 passed 00:06:07.421 Test: claim_v1_existing_v2 ...[2024-11-18 14:07:58.082185] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.082204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.082347] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:07.421 [2024-11-18 14:07:58.082396] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:07.421 [2024-11-18 
14:07:58.082428] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:07.421 passed 00:06:07.421 Test: examine_claimed ...[2024-11-18 14:07:58.082670] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:07.421 passed 00:06:07.421 00:06:07.421 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.421 suites 1 1 n/a 0 0 00:06:07.421 tests 59 59 59 0 0 00:06:07.421 asserts 4599 4599 4599 0 n/a 00:06:07.421 00:06:07.421 Elapsed time = 1.187 seconds 00:06:07.421 14:07:58 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:07.421 00:06:07.421 00:06:07.421 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.421 http://cunit.sourceforge.net/ 00:06:07.421 00:06:07.421 00:06:07.421 Suite: nvme 00:06:07.421 Test: test_create_ctrlr ...passed 00:06:07.421 Test: test_reset_ctrlr ...[2024-11-18 14:07:58.136712] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.421 passed 00:06:07.421 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:07.421 Test: test_failover_ctrlr ...passed 00:06:07.421 Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-18 14:07:58.139569] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.421 [2024-11-18 14:07:58.139797] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.421 [2024-11-18 14:07:58.140054] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.421 passed 00:06:07.422 Test: test_pending_reset ...[2024-11-18 14:07:58.141611] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.141940] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 passed 00:06:07.422 Test: test_attach_ctrlr ...[2024-11-18 14:07:58.143117] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:07.422 passed 00:06:07.422 Test: test_aer_cb ...passed 00:06:07.422 Test: test_submit_nvme_cmd ...passed 00:06:07.422 Test: test_add_remove_trid ...passed 00:06:07.422 Test: test_abort ...[2024-11-18 14:07:58.146575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:07.422 passed 00:06:07.422 Test: test_get_io_qpair ...passed 00:06:07.422 Test: test_bdev_unregister ...passed 00:06:07.422 Test: test_compare_ns ...passed 00:06:07.422 Test: test_init_ana_log_page ...passed 00:06:07.422 Test: test_get_memory_domains ...passed 00:06:07.422 Test: test_reconnect_qpair ...[2024-11-18 14:07:58.149514] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:07.422 passed 00:06:07.422 Test: test_create_bdev_ctrlr ...[2024-11-18 14:07:58.150066] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:07.422 passed 00:06:07.422 Test: test_add_multi_ns_to_bdev ...[2024-11-18 14:07:58.151498] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:07.422 passed 00:06:07.422 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:07.422 Test: test_admin_path ...passed 00:06:07.422 Test: test_reset_bdev_ctrlr ...passed 00:06:07.422 Test: test_find_io_path ...passed 00:06:07.422 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:07.422 Test: test_retry_io_for_io_path_error ...passed 00:06:07.422 Test: test_retry_io_count ...passed 00:06:07.422 Test: test_concurrent_read_ana_log_page ...passed 00:06:07.422 Test: test_retry_io_for_ana_error ...passed 00:06:07.422 Test: test_check_io_error_resiliency_params ...[2024-11-18 14:07:58.158540] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:07.422 [2024-11-18 14:07:58.158622] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:07.422 [2024-11-18 14:07:58.158659] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:07.422 [2024-11-18 14:07:58.158697] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:07.422 [2024-11-18 14:07:58.158728] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:07.422 [2024-11-18 14:07:58.158760] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:07.422 [2024-11-18 14:07:58.158782] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:07.422 passed 00:06:07.422 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-18 14:07:58.158832] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:07.422 [2024-11-18 14:07:58.158861] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:07.422 passed 00:06:07.422 Test: test_reconnect_ctrlr ...[2024-11-18 14:07:58.159754] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.159899] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:07.422 [2024-11-18 14:07:58.160174] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.160369] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.160588] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 passed 00:06:07.422 Test: test_retry_failover_ctrlr ...[2024-11-18 14:07:58.161005] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 passed 00:06:07.422 Test: test_fail_path ...[2024-11-18 14:07:58.161581] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.161757] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.161928] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.162052] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.162241] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 passed 00:06:07.422 Test: test_nvme_ns_cmp ...passed 00:06:07.422 Test: test_ana_transition ...passed 00:06:07.422 Test: test_set_preferred_path ...passed 00:06:07.422 Test: test_find_next_io_path ...passed 00:06:07.422 Test: test_find_io_path_min_qd ...passed 00:06:07.422 Test: test_disable_auto_failback ...[2024-11-18 14:07:58.163904] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 passed 00:06:07.422 Test: test_set_multipath_policy ...passed 00:06:07.422 Test: test_uuid_generation ...passed 00:06:07.422 Test: test_retry_io_to_same_path ...passed 00:06:07.422 Test: test_race_between_reset_and_disconnected ...passed 00:06:07.422 Test: test_ctrlr_op_rpc ...passed 00:06:07.422 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:07.422 Test: test_disable_enable_ctrlr ...[2024-11-18 14:07:58.167610] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:07.422 [2024-11-18 14:07:58.167806] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:07.422 passed 00:06:07.422 Test: test_delete_ctrlr_done ...passed 00:06:07.422 Test: test_ns_remove_during_reset ...passed 00:06:07.422 00:06:07.422 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.422 suites 1 1 n/a 0 0 00:06:07.422 tests 48 48 48 0 0 00:06:07.422 asserts 3553 3553 3553 0 n/a 00:06:07.422 00:06:07.422 Elapsed time = 0.033 seconds 00:06:07.422 14:07:58 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:07.422 Test Options 00:06:07.422 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:07.422 00:06:07.422 00:06:07.422 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.422 http://cunit.sourceforge.net/ 00:06:07.422 00:06:07.422 00:06:07.422 Suite: raid 00:06:07.422 Test: test_create_raid ...passed 00:06:07.422 Test: test_create_raid_superblock ...passed 00:06:07.422 Test: test_delete_raid ...passed 00:06:07.422 Test: test_create_raid_invalid_args ...[2024-11-18 14:07:58.211351] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:07.422 [2024-11-18 14:07:58.211907] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:07.422 [2024-11-18 14:07:58.212534] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:07.422 [2024-11-18 14:07:58.212883] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:07.422 [2024-11-18 14:07:58.213815] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:07.422 passed 00:06:07.422 Test: test_delete_raid_invalid_args ...passed 00:06:07.422 Test: test_io_channel ...passed 00:06:07.422 Test: test_reset_io ...passed 00:06:07.422 Test: test_write_io ...passed 00:06:07.422 Test: test_read_io ...passed 00:06:07.422 Test: test_unmap_io ...passed 00:06:07.422 Test: test_io_failure ...[2024-11-18 14:07:59.073065] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:07.422 passed 00:06:07.422 Test: test_multi_raid_no_io ...passed 00:06:07.422 Test: test_multi_raid_with_io ...passed 00:06:07.422 Test: test_io_type_supported ...passed 00:06:07.422 Test: test_raid_json_dump_info ...passed 00:06:07.422 Test: test_context_size ...passed 00:06:07.422 Test: test_raid_level_conversions ...passed 00:06:07.422 Test: test_raid_process ...passed 00:06:07.422 Test: test_raid_io_split ...passed 00:06:07.422 00:06:07.422 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 19 19 19 0 0 00:06:07.423 asserts 177879 177879 177879 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.873 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: raid_sb 00:06:07.423 Test: test_raid_bdev_write_superblock ...passed 00:06:07.423 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:07.423 Test: 
test_raid_bdev_parse_superblock ...[2024-11-18 14:07:59.119971] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:07.423 passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 3 3 3 0 0 00:06:07.423 asserts 32 32 32 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.001 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: concat 00:06:07.423 Test: test_concat_start ...passed 00:06:07.423 Test: test_concat_rw ...passed 00:06:07.423 Test: test_concat_null_payload ...passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 3 3 3 0 0 00:06:07.423 asserts 8097 8097 8097 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.007 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: raid1 00:06:07.423 Test: test_raid1_start ...passed 00:06:07.423 Test: test_raid1_read_balancing ...passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 2 2 2 0 0 00:06:07.423 asserts 2856 2856 2856 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.003 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: zone 00:06:07.423 Test: test_zone_get_operation ...passed 00:06:07.423 Test: test_bdev_zone_get_info ...passed 00:06:07.423 Test: test_bdev_zone_management ...passed 00:06:07.423 Test: test_bdev_zone_append ...passed 00:06:07.423 Test: test_bdev_zone_append_with_md ...passed 00:06:07.423 Test: test_bdev_zone_appendv ...passed 00:06:07.423 Test: test_bdev_zone_appendv_with_md ...passed 00:06:07.423 Test: test_bdev_io_get_append_location ...passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 8 8 8 0 0 00:06:07.423 asserts 94 94 94 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.000 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: gpt_parse 00:06:07.423 Test: test_parse_mbr_and_primary ...[2024-11-18 14:07:59.249043] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.423 [2024-11-18 14:07:59.249554] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.423 [2024-11-18 14:07:59.249683] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:07.423 [2024-11-18 14:07:59.249886] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:07.423 [2024-11-18 14:07:59.249986] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:07.423 [2024-11-18 14:07:59.250119] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:07.423 passed 00:06:07.423 Test: test_parse_secondary ...[2024-11-18 14:07:59.250986] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:07.423 [2024-11-18 14:07:59.251051] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:07.423 [2024-11-18 14:07:59.251108] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:07.423 [2024-11-18 14:07:59.251149] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:07.423 passed 00:06:07.423 Test: test_check_mbr ...[2024-11-18 14:07:59.252016] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.423 [2024-11-18 14:07:59.252074] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:07.423 passed 00:06:07.423 Test: test_read_header ...[2024-11-18 14:07:59.252152] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:07.423 [2024-11-18 14:07:59.252251] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:07.423 [2024-11-18 14:07:59.252351] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:07.423 [2024-11-18 14:07:59.252401] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:07.423 passed 00:06:07.423 Test: test_read_partitions ...[2024-11-18 14:07:59.252447] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:07.423 [2024-11-18 14:07:59.252488] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:07.423 [2024-11-18 14:07:59.252549] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:07.423 [2024-11-18 14:07:59.252603] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:07.423 [2024-11-18 14:07:59.252642] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:07.423 [2024-11-18 14:07:59.252676] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:07.423 [2024-11-18 14:07:59.253065] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:06:07.423 passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 5 5 5 0 0 00:06:07.423 asserts 33 33 33 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.005 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:07.423 00:06:07.423 00:06:07.423 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.423 http://cunit.sourceforge.net/ 00:06:07.423 00:06:07.423 00:06:07.423 Suite: bdev_part 00:06:07.423 Test: part_test ...[2024-11-18 14:07:59.291027] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:07.423 passed 00:06:07.423 Test: part_free_test ...passed 00:06:07.423 Test: part_get_io_channel_test ...passed 00:06:07.423 Test: part_construct_ext ...passed 00:06:07.423 00:06:07.423 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.423 suites 1 1 n/a 0 0 00:06:07.423 tests 4 4 4 0 0 00:06:07.423 asserts 48 48 48 0 n/a 00:06:07.423 00:06:07.423 Elapsed time = 0.052 seconds 00:06:07.423 14:07:59 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:07.423 00:06:07.423 00:06:07.424 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.424 http://cunit.sourceforge.net/ 00:06:07.424 00:06:07.424 00:06:07.424 Suite: scsi_nvme_suite 00:06:07.424 Test: scsi_nvme_translate_test ...passed 00:06:07.424 00:06:07.424 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.424 suites 1 1 n/a 0 0 00:06:07.424 tests 1 1 1 0 0 00:06:07.424 asserts 104 104 104 0 n/a 00:06:07.424 00:06:07.424 Elapsed time = 0.000 seconds 00:06:07.424 14:07:59 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:07.424 00:06:07.424 00:06:07.424 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.424 http://cunit.sourceforge.net/ 00:06:07.424 00:06:07.424 00:06:07.424 Suite: lvol 00:06:07.424 Test: ut_lvs_init ...[2024-11-18 14:07:59.407469] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:07.424 [2024-11-18 14:07:59.407766] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:07.424 passed 00:06:07.424 Test: ut_lvol_init ...passed 00:06:07.424 Test: ut_lvol_snapshot ...passed 00:06:07.424 Test: ut_lvol_clone ...passed 00:06:07.424 Test: ut_lvs_destroy ...passed 00:06:07.424 Test: ut_lvs_unload ...passed 00:06:07.424 Test: ut_lvol_resize ...[2024-11-18 14:07:59.408982] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:07.424 passed 00:06:07.424 Test: ut_lvol_set_read_only ...passed 00:06:07.424 Test: ut_lvol_hotremove ...passed 00:06:07.424 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:07.424 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:07.424 Test: ut_lvol_read_write ...passed 00:06:07.424 Test: ut_vbdev_lvol_submit_request ...passed 00:06:07.424 Test: ut_lvol_examine_config ...passed 00:06:07.424 Test: ut_lvol_examine_disk ...[2024-11-18 14:07:59.409702] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:07.424 passed 00:06:07.424 Test: ut_lvol_rename ...[2024-11-18 14:07:59.410589] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:07.424 [2024-11-18 14:07:59.410676] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:07.424 passed 00:06:07.424 Test: ut_bdev_finish ...passed 00:06:07.424 Test: ut_lvs_rename ...passed 00:06:07.424 Test: ut_lvol_seek ...passed 00:06:07.424 Test: ut_esnap_dev_create ...[2024-11-18 14:07:59.411338] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:07.424 [2024-11-18 14:07:59.411414] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:07.424 [2024-11-18 14:07:59.411446] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:07.424 [2024-11-18 14:07:59.411504] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:07.424 passed 00:06:07.424 Test: ut_lvol_esnap_clone_bad_args ...[2024-11-18 14:07:59.411667] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:07.424 [2024-11-18 14:07:59.411709] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:07.424 passed 00:06:07.424 00:06:07.424 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.424 suites 1 1 n/a 0 0 00:06:07.424 tests 21 21 21 0 0 00:06:07.424 asserts 712 712 712 0 n/a 00:06:07.424 00:06:07.424 Elapsed time = 0.005 seconds 00:06:07.424 14:07:59 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:07.424 00:06:07.424 00:06:07.424 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.424 http://cunit.sourceforge.net/ 00:06:07.424 00:06:07.424 00:06:07.424 Suite: zone_block 00:06:07.424 Test: test_zone_block_create ...passed 00:06:07.424 Test: test_zone_block_create_invalid ...[2024-11-18 14:07:59.472213] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:07.424 [2024-11-18 14:07:59.472573] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-18 14:07:59.472773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:07.424 [2024-11-18 14:07:59.472864] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-11-18 14:07:59.473076] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:07.424 [2024-11-18 14:07:59.473149] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-11-18 14:07:59.473256] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:07.424 [2024-11-18 14:07:59.473324] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:07.684 Test: test_get_zone_info ...[2024-11-18 14:07:59.473907] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.473998] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.474065] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 passed 00:06:07.684 Test: test_supported_io_types ...passed 00:06:07.684 Test: test_reset_zone ...[2024-11-18 14:07:59.474933] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.475018] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 passed 00:06:07.684 Test: test_open_zone ...[2024-11-18 14:07:59.475515] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.476247] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.476328] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 passed 00:06:07.684 Test: test_zone_write ...[2024-11-18 14:07:59.476861] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:07.684 [2024-11-18 14:07:59.476937] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.477014] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:07.684 [2024-11-18 14:07:59.477085] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.483117] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:07.684 [2024-11-18 14:07:59.483217] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:07.684 [2024-11-18 14:07:59.483309] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:07.684 [2024-11-18 14:07:59.483343] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.489813] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:07.684 passed 00:06:07.684 Test: test_zone_read ...[2024-11-18 14:07:59.489923] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.490396] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:07.684 [2024-11-18 14:07:59.490462] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.490577] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:07.684 [2024-11-18 14:07:59.490628] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.491128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:07.684 [2024-11-18 14:07:59.491242] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 passed 00:06:07.684 Test: test_close_zone ...[2024-11-18 14:07:59.491663] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.491774] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.492038] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.492110] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 passed 00:06:07.684 Test: test_finish_zone ...[2024-11-18 14:07:59.492800] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.492910] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:07.684 passed 00:06:07.684 Test: test_append_zone ...[2024-11-18 14:07:59.493357] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:07.684 [2024-11-18 14:07:59.493416] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.493495] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:07.684 [2024-11-18 14:07:59.493539] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 [2024-11-18 14:07:59.505236] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:07.684 [2024-11-18 14:07:59.505304] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:07.684 passed 00:06:07.684 00:06:07.684 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.684 suites 1 1 n/a 0 0 00:06:07.684 tests 11 11 11 0 0 00:06:07.684 asserts 3437 3437 3437 0 n/a 00:06:07.684 00:06:07.684 Elapsed time = 0.034 seconds 00:06:07.684 14:07:59 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:07.684 00:06:07.684 00:06:07.684 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.684 http://cunit.sourceforge.net/ 00:06:07.684 00:06:07.684 00:06:07.684 Suite: bdev 00:06:07.684 Test: basic ...[2024-11-18 14:07:59.607726] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55907937a401): Operation not permitted (rc=-1) 00:06:07.684 [2024-11-18 14:07:59.608054] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55907937a3c0): Operation not permitted (rc=-1) 00:06:07.684 [2024-11-18 14:07:59.608111] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55907937a401): Operation not permitted (rc=-1) 00:06:07.684 passed 00:06:07.684 Test: unregister_and_close ...passed 00:06:07.684 Test: unregister_and_close_different_threads ...passed 00:06:07.943 Test: basic_qos ...passed 00:06:07.943 Test: put_channel_during_reset ...passed 00:06:07.943 Test: aborted_reset ...passed 00:06:07.943 Test: aborted_reset_no_outstanding_io ...passed 00:06:07.943 Test: io_during_reset ...passed 00:06:07.943 Test: reset_completions ...passed 00:06:08.202 Test: io_during_qos_queue ...passed 00:06:08.202 Test: io_during_qos_reset ...passed 00:06:08.202 Test: enomem ...passed 00:06:08.202 Test: enomem_multi_bdev ...passed 00:06:08.202 Test: enomem_multi_bdev_unregister ...passed 00:06:08.202 Test: enomem_multi_io_target ...passed 00:06:08.461 Test: qos_dynamic_enable ...passed 00:06:08.461 Test: bdev_histograms_mt ...passed 00:06:08.461 Test: bdev_set_io_timeout_mt ...[2024-11-18 14:08:00.395835] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:08.461 passed 00:06:08.461 Test: lock_lba_range_then_submit_io ...[2024-11-18 14:08:00.412354] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x55907937a380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:08.461 
passed 00:06:08.461 Test: unregister_during_reset ...passed 00:06:08.461 Test: event_notify_and_close ...passed 00:06:08.720 Test: unregister_and_qos_poller ...passed 00:06:08.720 Suite: bdev_wrong_thread 00:06:08.721 Test: spdk_bdev_register_wt ...[2024-11-18 14:08:00.572800] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:08.721 passed 00:06:08.721 Test: spdk_bdev_examine_wt ...[2024-11-18 14:08:00.573072] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:08.721 passed 00:06:08.721 00:06:08.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.721 suites 2 2 n/a 0 0 00:06:08.721 tests 24 24 24 0 0 00:06:08.721 asserts 621 621 621 0 n/a 00:06:08.721 00:06:08.721 Elapsed time = 0.992 seconds 00:06:08.721 00:06:08.721 real 0m3.724s 00:06:08.721 user 0m1.689s 00:06:08.721 sys 0m2.035s 00:06:08.721 14:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.721 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:06:08.721 ************************************ 00:06:08.721 END TEST unittest_bdev 00:06:08.721 ************************************ 00:06:08.721 14:08:00 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.721 14:08:00 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.721 14:08:00 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.721 14:08:00 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.721 14:08:00 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:08.721 14:08:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.721 14:08:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.721 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:06:08.721 ************************************ 00:06:08.721 START TEST unittest_bdev_raid5f 00:06:08.721 ************************************ 00:06:08.721 14:08:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:08.721 00:06:08.721 00:06:08.721 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.721 http://cunit.sourceforge.net/ 00:06:08.721 00:06:08.721 00:06:08.721 Suite: raid5f 00:06:08.721 Test: test_raid5f_start ...passed 00:06:09.288 Test: test_raid5f_submit_read_request ...passed 00:06:09.288 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:12.575 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:27.458 Test: test_raid5f_chunk_write_error ...passed 00:06:32.730 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:35.265 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:01.810 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:01.810 00:07:01.810 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.810 suites 1 1 n/a 0 0 00:07:01.810 tests 8 8 8 0 0 00:07:01.810 asserts 351864 351864 351864 0 n/a 00:07:01.810 00:07:01.810 Elapsed time = 49.737 seconds 00:07:01.810 00:07:01.810 real 0m49.842s 00:07:01.810 user 
0m46.951s 00:07:01.810 sys 0m2.871s 00:07:01.810 14:08:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.810 ************************************ 00:07:01.810 14:08:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.810 END TEST unittest_bdev_raid5f 00:07:01.810 ************************************ 00:07:01.810 14:08:50 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:07:01.810 14:08:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.810 14:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.810 14:08:50 -- common/autotest_common.sh@10 -- # set +x 00:07:01.810 ************************************ 00:07:01.810 START TEST unittest_blob_blobfs 00:07:01.810 ************************************ 00:07:01.810 14:08:50 -- common/autotest_common.sh@1114 -- # unittest_blob 00:07:01.810 14:08:50 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:01.810 14:08:50 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:01.810 00:07:01.810 00:07:01.810 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.810 http://cunit.sourceforge.net/ 00:07:01.810 00:07:01.810 00:07:01.810 Suite: blob_nocopy_noextent 00:07:01.810 Test: blob_init ...[2024-11-18 14:08:50.600901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:01.810 passed 00:07:01.810 Test: blob_thin_provision ...passed 00:07:01.810 Test: blob_read_only ...passed 00:07:01.810 Test: bs_load ...[2024-11-18 14:08:50.720258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:01.810 passed 00:07:01.810 Test: bs_load_custom_cluster_size ...passed 00:07:01.810 Test: bs_load_after_failed_grow ...passed 00:07:01.810 Test: bs_cluster_sz ...[2024-11-18 14:08:50.767013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:01.810 [2024-11-18 14:08:50.767594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:01.810 [2024-11-18 14:08:50.767811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:01.810 passed 00:07:01.810 Test: bs_resize_md ...passed 00:07:01.810 Test: bs_destroy ...passed 00:07:01.810 Test: bs_type ...passed 00:07:01.810 Test: bs_super_block ...passed 00:07:01.810 Test: bs_test_recover_cluster_count ...passed 00:07:01.810 Test: bs_grow_live ...passed 00:07:01.810 Test: bs_grow_live_no_space ...passed 00:07:01.810 Test: bs_test_grow ...passed 00:07:01.810 Test: blob_serialize_test ...passed 00:07:01.810 Test: super_block_crc ...passed 00:07:01.810 Test: blob_thin_prov_write_count_io ...passed 00:07:01.810 Test: bs_load_iter_test ...passed 00:07:01.810 Test: blob_relations ...[2024-11-18 14:08:51.007244] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-18 14:08:51.007381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-18 14:08:51.008395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-18 14:08:51.008485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 passed 00:07:01.810 Test: blob_relations2 ...[2024-11-18 14:08:51.027815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-18 14:08:51.027930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-18 14:08:51.027972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-18 14:08:51.027994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-18 14:08:51.029464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-18 14:08:51.029544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-18 14:08:51.030009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.810 [2024-11-18 14:08:51.030075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 passed 00:07:01.810 Test: blob_relations3 ...passed 00:07:01.810 Test: blobstore_clean_power_failure ...passed 00:07:01.810 Test: blob_delete_snapshot_power_failure ...[2024-11-18 14:08:51.310334] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:01.810 [2024-11-18 14:08:51.327960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:01.810 [2024-11-18 14:08:51.328048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.810 [2024-11-18 14:08:51.328096] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-18 14:08:51.345851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:01.810 [2024-11-18 14:08:51.345953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:01.810 [2024-11-18 14:08:51.346011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.810 [2024-11-18 14:08:51.346057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.810 [2024-11-18 14:08:51.364631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:01.810 [2024-11-18 14:08:51.364782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.811 [2024-11-18 14:08:51.383212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:01.811 [2024-11-18 14:08:51.383363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.811 [2024-11-18 14:08:51.401869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:01.811 [2024-11-18 14:08:51.401979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.811 passed 00:07:01.811 Test: blob_create_snapshot_power_failure ...[2024-11-18 14:08:51.464607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:01.811 [2024-11-18 14:08:51.505916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:01.811 [2024-11-18 14:08:51.527751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:01.811 passed 00:07:01.811 Test: blob_io_unit ...passed 00:07:01.811 Test: blob_io_unit_compatibility ...passed 00:07:01.811 Test: blob_ext_md_pages ...passed 00:07:01.811 Test: blob_esnap_io_4096_4096 ...passed 00:07:01.811 Test: blob_esnap_io_512_512 ...passed 00:07:01.811 Test: blob_esnap_io_4096_512 ...passed 00:07:01.811 Test: blob_esnap_io_512_4096 ...passed 00:07:01.811 Suite: blob_bs_nocopy_noextent 00:07:01.811 Test: blob_open ...passed 00:07:01.811 Test: blob_create ...[2024-11-18 14:08:51.887870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:01.811 passed 00:07:01.811 Test: blob_create_loop ...passed 00:07:01.811 Test: blob_create_fail ...[2024-11-18 14:08:52.020526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:01.811 passed 00:07:01.811 Test: blob_create_internal ...passed 00:07:01.811 Test: blob_create_zero_extent ...passed 00:07:01.811 Test: blob_snapshot ...passed 00:07:01.811 Test: blob_clone ...passed 00:07:01.811 Test: blob_inflate ...[2024-11-18 14:08:52.293577] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:01.811 passed 00:07:01.811 Test: blob_delete ...passed 00:07:01.811 Test: blob_resize_test ...[2024-11-18 14:08:52.387180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:01.811 passed 00:07:01.811 Test: channel_ops ...passed 00:07:01.811 Test: blob_super ...passed 00:07:01.811 Test: blob_rw_verify_iov ...passed 00:07:01.811 Test: blob_unmap ...passed 00:07:01.811 Test: blob_iter ...passed 00:07:01.811 Test: blob_parse_md ...passed 00:07:01.811 Test: bs_load_pending_removal ...passed 00:07:01.811 Test: bs_unload ...[2024-11-18 14:08:52.817235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:01.811 passed 00:07:01.811 Test: bs_usable_clusters ...passed 00:07:01.811 Test: blob_crc ...[2024-11-18 14:08:52.933296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:01.811 [2024-11-18 14:08:52.933469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:01.811 passed 00:07:01.811 Test: blob_flags ...passed 00:07:01.811 Test: bs_version ...passed 00:07:01.811 Test: blob_set_xattrs_test ...[2024-11-18 14:08:53.085245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:01.811 [2024-11-18 14:08:53.085391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:01.811 passed 00:07:01.811 Test: blob_thin_prov_alloc ...passed 00:07:01.811 Test: blob_insert_cluster_msg_test ...passed 00:07:01.811 Test: blob_thin_prov_rw ...passed 00:07:01.811 Test: blob_thin_prov_rle ...passed 00:07:01.811 Test: blob_thin_prov_rw_iov ...passed 00:07:01.811 Test: blob_snapshot_rw ...passed 00:07:01.811 Test: blob_snapshot_rw_iov ...passed 00:07:01.811 Test: blob_inflate_rw ...passed 00:07:01.811 Test: blob_snapshot_freeze_io ...passed 00:07:02.069 Test: blob_operation_split_rw ...passed 00:07:02.069 Test: blob_operation_split_rw_iov ...passed 00:07:02.328 Test: blob_simultaneous_operations ...[2024-11-18 14:08:54.147324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:02.328 [2024-11-18 14:08:54.147420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.328 [2024-11-18 14:08:54.148646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:02.328 [2024-11-18 14:08:54.148698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.328 [2024-11-18 14:08:54.159803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:02.328 [2024-11-18 14:08:54.159857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.328 [2024-11-18 14:08:54.160004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:02.328 [2024-11-18 14:08:54.160041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:02.328 passed 00:07:02.328 Test: blob_persist_test ...passed 00:07:02.328 Test: blob_decouple_snapshot ...passed 00:07:02.328 Test: blob_seek_io_unit ...passed 00:07:02.328 Test: blob_nested_freezes ...passed 00:07:02.328 Suite: blob_blob_nocopy_noextent 00:07:02.328 Test: blob_write ...passed 00:07:02.587 Test: blob_read ...passed 00:07:02.587 Test: blob_rw_verify ...passed 00:07:02.587 Test: blob_rw_verify_iov_nomem ...passed 00:07:02.587 Test: blob_rw_iov_read_only ...passed 00:07:02.587 Test: blob_xattr ...passed 00:07:02.587 Test: blob_dirty_shutdown ...passed 00:07:02.587 Test: blob_is_degraded ...passed 00:07:02.587 Suite: blob_esnap_bs_nocopy_noextent 00:07:02.587 Test: blob_esnap_create ...passed 00:07:02.845 Test: blob_esnap_thread_add_remove ...passed 00:07:02.845 Test: blob_esnap_clone_snapshot ...passed 00:07:02.845 Test: blob_esnap_clone_inflate ...passed 00:07:02.845 Test: blob_esnap_clone_decouple ...passed 00:07:02.845 Test: blob_esnap_clone_reload ...passed 00:07:02.845 Test: blob_esnap_hotplug ...passed 00:07:02.845 Suite: blob_nocopy_extent 00:07:02.846 Test: blob_init ...[2024-11-18 14:08:54.875915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:02.846 passed 00:07:02.846 Test: blob_thin_provision ...passed 00:07:02.846 Test: blob_read_only ...passed 00:07:03.105 Test: bs_load ...[2024-11-18 14:08:54.921518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:03.105 passed 00:07:03.105 Test: bs_load_custom_cluster_size ...passed 00:07:03.105 Test: bs_load_after_failed_grow ...passed 00:07:03.105 Test: bs_cluster_sz ...[2024-11-18 14:08:54.946067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:03.105 [2024-11-18 14:08:54.946368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:03.105 [2024-11-18 14:08:54.946429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:03.105 passed 00:07:03.105 Test: bs_resize_md ...passed 00:07:03.105 Test: bs_destroy ...passed 00:07:03.105 Test: bs_type ...passed 00:07:03.105 Test: bs_super_block ...passed 00:07:03.105 Test: bs_test_recover_cluster_count ...passed 00:07:03.105 Test: bs_grow_live ...passed 00:07:03.105 Test: bs_grow_live_no_space ...passed 00:07:03.105 Test: bs_test_grow ...passed 00:07:03.105 Test: blob_serialize_test ...passed 00:07:03.105 Test: super_block_crc ...passed 00:07:03.105 Test: blob_thin_prov_write_count_io ...passed 00:07:03.105 Test: bs_load_iter_test ...passed 00:07:03.105 Test: blob_relations ...[2024-11-18 14:08:55.098588] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.105 [2024-11-18 14:08:55.098698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.105 [2024-11-18 14:08:55.099606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.105 [2024-11-18 14:08:55.099676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.105 passed 00:07:03.105 Test: blob_relations2 ...[2024-11-18 14:08:55.113063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.105 [2024-11-18 14:08:55.113142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.105 [2024-11-18 14:08:55.113196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.105 [2024-11-18 14:08:55.113228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.105 [2024-11-18 14:08:55.114511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.105 [2024-11-18 14:08:55.114567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.105 [2024-11-18 14:08:55.114926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:03.105 [2024-11-18 14:08:55.114986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.105 passed 00:07:03.105 Test: blob_relations3 ...passed 00:07:03.363 Test: blobstore_clean_power_failure ...passed 00:07:03.363 Test: blob_delete_snapshot_power_failure ...[2024-11-18 14:08:55.262202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:03.364 [2024-11-18 14:08:55.274102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:03.364 [2024-11-18 14:08:55.285882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:03.364 [2024-11-18 14:08:55.285964] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.364 [2024-11-18 14:08:55.285998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.364 [2024-11-18 14:08:55.297615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:03.364 [2024-11-18 14:08:55.297692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:03.364 [2024-11-18 14:08:55.297728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.364 [2024-11-18 14:08:55.297757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.364 [2024-11-18 14:08:55.309383] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:03.364 [2024-11-18 14:08:55.309459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:03.364 [2024-11-18 14:08:55.309519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:03.364 [2024-11-18 14:08:55.309566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.364 [2024-11-18 14:08:55.321260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:03.364 [2024-11-18 14:08:55.321391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.364 [2024-11-18 14:08:55.333223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:03.364 [2024-11-18 14:08:55.333344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.364 [2024-11-18 14:08:55.345359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:03.364 [2024-11-18 14:08:55.345458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.364 passed 00:07:03.364 Test: blob_create_snapshot_power_failure ...[2024-11-18 14:08:55.380465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:03.364 [2024-11-18 14:08:55.392718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:03.364 [2024-11-18 14:08:55.416427] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:03.364 [2024-11-18 14:08:55.428401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:03.622 passed 00:07:03.622 Test: blob_io_unit ...passed 00:07:03.622 Test: blob_io_unit_compatibility ...passed 00:07:03.622 Test: blob_ext_md_pages ...passed 00:07:03.622 Test: blob_esnap_io_4096_4096 ...passed 00:07:03.622 Test: blob_esnap_io_512_512 ...passed 00:07:03.622 Test: blob_esnap_io_4096_512 ...passed 00:07:03.622 Test: 
blob_esnap_io_512_4096 ...passed 00:07:03.622 Suite: blob_bs_nocopy_extent 00:07:03.622 Test: blob_open ...passed 00:07:03.622 Test: blob_create ...[2024-11-18 14:08:55.655633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:03.622 passed 00:07:03.881 Test: blob_create_loop ...passed 00:07:03.881 Test: blob_create_fail ...[2024-11-18 14:08:55.755767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:03.881 passed 00:07:03.881 Test: blob_create_internal ...passed 00:07:03.881 Test: blob_create_zero_extent ...passed 00:07:03.881 Test: blob_snapshot ...passed 00:07:03.881 Test: blob_clone ...passed 00:07:03.881 Test: blob_inflate ...[2024-11-18 14:08:55.929370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:03.881 passed 00:07:04.139 Test: blob_delete ...passed 00:07:04.139 Test: blob_resize_test ...[2024-11-18 14:08:55.993000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:04.139 passed 00:07:04.139 Test: channel_ops ...passed 00:07:04.139 Test: blob_super ...passed 00:07:04.139 Test: blob_rw_verify_iov ...passed 00:07:04.139 Test: blob_unmap ...passed 00:07:04.139 Test: blob_iter ...passed 00:07:04.139 Test: blob_parse_md ...passed 00:07:04.398 Test: bs_load_pending_removal ...passed 00:07:04.398 Test: bs_unload ...[2024-11-18 14:08:56.261169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:04.398 passed 00:07:04.398 Test: bs_usable_clusters ...passed 00:07:04.398 Test: blob_crc ...[2024-11-18 14:08:56.324130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:04.398 [2024-11-18 14:08:56.324252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:04.398 passed 00:07:04.398 Test: blob_flags ...passed 00:07:04.398 Test: bs_version ...passed 00:07:04.398 Test: blob_set_xattrs_test ...[2024-11-18 14:08:56.417127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:04.398 [2024-11-18 14:08:56.417248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:04.398 passed 00:07:04.656 Test: blob_thin_prov_alloc ...passed 00:07:04.656 Test: blob_insert_cluster_msg_test ...passed 00:07:04.656 Test: blob_thin_prov_rw ...passed 00:07:04.656 Test: blob_thin_prov_rle ...passed 00:07:04.656 Test: blob_thin_prov_rw_iov ...passed 00:07:04.656 Test: blob_snapshot_rw ...passed 00:07:04.656 Test: blob_snapshot_rw_iov ...passed 00:07:04.915 Test: blob_inflate_rw ...passed 00:07:04.915 Test: blob_snapshot_freeze_io ...passed 00:07:05.174 Test: blob_operation_split_rw ...passed 00:07:05.174 Test: blob_operation_split_rw_iov ...passed 00:07:05.433 Test: blob_simultaneous_operations ...[2024-11-18 14:08:57.253812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.433 [2024-11-18 
14:08:57.253905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-18 14:08:57.254956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.433 [2024-11-18 14:08:57.255002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-18 14:08:57.265312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.433 [2024-11-18 14:08:57.265370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 [2024-11-18 14:08:57.265470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:05.433 [2024-11-18 14:08:57.265495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.433 passed 00:07:05.433 Test: blob_persist_test ...passed 00:07:05.433 Test: blob_decouple_snapshot ...passed 00:07:05.433 Test: blob_seek_io_unit ...passed 00:07:05.433 Test: blob_nested_freezes ...passed 00:07:05.433 Suite: blob_blob_nocopy_extent 00:07:05.433 Test: blob_write ...passed 00:07:05.433 Test: blob_read ...passed 00:07:05.692 Test: blob_rw_verify ...passed 00:07:05.693 Test: blob_rw_verify_iov_nomem ...passed 00:07:05.693 Test: blob_rw_iov_read_only ...passed 00:07:05.693 Test: blob_xattr ...passed 00:07:05.693 Test: blob_dirty_shutdown ...passed 00:07:05.693 Test: blob_is_degraded ...passed 00:07:05.693 Suite: blob_esnap_bs_nocopy_extent 00:07:05.693 Test: blob_esnap_create ...passed 00:07:05.693 Test: blob_esnap_thread_add_remove ...passed 00:07:05.952 Test: blob_esnap_clone_snapshot ...passed 00:07:05.952 Test: blob_esnap_clone_inflate ...passed 00:07:05.952 Test: blob_esnap_clone_decouple ...passed 00:07:05.952 Test: blob_esnap_clone_reload ...passed 00:07:05.952 Test: blob_esnap_hotplug ...passed 00:07:05.952 Suite: blob_copy_noextent 00:07:05.952 Test: blob_init ...[2024-11-18 14:08:57.916845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:05.952 passed 00:07:05.952 Test: blob_thin_provision ...passed 00:07:05.952 Test: blob_read_only ...passed 00:07:05.952 Test: bs_load ...[2024-11-18 14:08:57.961773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:05.952 passed 00:07:05.952 Test: bs_load_custom_cluster_size ...passed 00:07:05.952 Test: bs_load_after_failed_grow ...passed 00:07:05.952 Test: bs_cluster_sz ...[2024-11-18 14:08:57.984246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:05.952 [2024-11-18 14:08:57.984439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:05.952 [2024-11-18 14:08:57.984485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:05.952 passed 00:07:05.952 Test: bs_resize_md ...passed 00:07:05.952 Test: bs_destroy ...passed 00:07:06.211 Test: bs_type ...passed 00:07:06.211 Test: bs_super_block ...passed 00:07:06.211 Test: bs_test_recover_cluster_count ...passed 00:07:06.211 Test: bs_grow_live ...passed 00:07:06.211 Test: bs_grow_live_no_space ...passed 00:07:06.211 Test: bs_test_grow ...passed 00:07:06.211 Test: blob_serialize_test ...passed 00:07:06.211 Test: super_block_crc ...passed 00:07:06.211 Test: blob_thin_prov_write_count_io ...passed 00:07:06.211 Test: bs_load_iter_test ...passed 00:07:06.211 Test: blob_relations ...[2024-11-18 14:08:58.126284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:06.211 [2024-11-18 14:08:58.126377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.211 [2024-11-18 14:08:58.126883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:06.211 [2024-11-18 14:08:58.126927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.211 passed 00:07:06.211 Test: blob_relations2 ...[2024-11-18 14:08:58.139580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:06.211 [2024-11-18 14:08:58.139647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.211 [2024-11-18 14:08:58.139672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:06.211 [2024-11-18 14:08:58.139686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.211 [2024-11-18 14:08:58.140455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:06.211 [2024-11-18 14:08:58.140511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.211 [2024-11-18 14:08:58.140759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:06.211 [2024-11-18 14:08:58.140804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.211 passed 00:07:06.211 Test: blob_relations3 ...passed 00:07:06.211 Test: blobstore_clean_power_failure ...passed 00:07:06.470 Test: blob_delete_snapshot_power_failure ...[2024-11-18 14:08:58.293532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.470 [2024-11-18 14:08:58.304772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:06.470 [2024-11-18 14:08:58.304855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:06.470 [2024-11-18 14:08:58.304889] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.470 [2024-11-18 14:08:58.316165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.470 [2024-11-18 14:08:58.316234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:06.470 [2024-11-18 14:08:58.316265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:06.470 [2024-11-18 14:08:58.316287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.470 [2024-11-18 14:08:58.327637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:06.470 [2024-11-18 14:08:58.327730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.470 [2024-11-18 14:08:58.339002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:06.470 [2024-11-18 14:08:58.339106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.470 [2024-11-18 14:08:58.350412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:06.470 [2024-11-18 14:08:58.350513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:06.470 passed 00:07:06.471 Test: blob_create_snapshot_power_failure ...[2024-11-18 14:08:58.383690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:06.471 [2024-11-18 14:08:58.406633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:06.471 [2024-11-18 14:08:58.418307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:06.471 passed 00:07:06.471 Test: blob_io_unit ...passed 00:07:06.471 Test: blob_io_unit_compatibility ...passed 00:07:06.471 Test: blob_ext_md_pages ...passed 00:07:06.471 Test: blob_esnap_io_4096_4096 ...passed 00:07:06.471 Test: blob_esnap_io_512_512 ...passed 00:07:06.729 Test: blob_esnap_io_4096_512 ...passed 00:07:06.730 Test: blob_esnap_io_512_4096 ...passed 00:07:06.730 Suite: blob_bs_copy_noextent 00:07:06.730 Test: blob_open ...passed 00:07:06.730 Test: blob_create ...[2024-11-18 14:08:58.642199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:06.730 passed 00:07:06.730 Test: blob_create_loop ...passed 00:07:06.730 Test: blob_create_fail ...[2024-11-18 14:08:58.728086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.730 passed 00:07:06.730 Test: blob_create_internal ...passed 00:07:06.730 Test: blob_create_zero_extent ...passed 00:07:06.988 Test: blob_snapshot ...passed 00:07:06.988 Test: blob_clone ...passed 00:07:06.988 Test: blob_inflate ...[2024-11-18 14:08:58.892397] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:06.988 passed 00:07:06.988 Test: blob_delete ...passed 00:07:06.988 Test: blob_resize_test ...[2024-11-18 14:08:58.956009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:06.988 passed 00:07:06.988 Test: channel_ops ...passed 00:07:06.988 Test: blob_super ...passed 00:07:06.988 Test: blob_rw_verify_iov ...passed 00:07:07.247 Test: blob_unmap ...passed 00:07:07.247 Test: blob_iter ...passed 00:07:07.247 Test: blob_parse_md ...passed 00:07:07.247 Test: bs_load_pending_removal ...passed 00:07:07.247 Test: bs_unload ...[2024-11-18 14:08:59.230023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:07.247 passed 00:07:07.247 Test: bs_usable_clusters ...passed 00:07:07.247 Test: blob_crc ...[2024-11-18 14:08:59.296331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:07.247 [2024-11-18 14:08:59.296450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:07.247 passed 00:07:07.507 Test: blob_flags ...passed 00:07:07.507 Test: bs_version ...passed 00:07:07.507 Test: blob_set_xattrs_test ...[2024-11-18 14:08:59.392233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:07.507 [2024-11-18 14:08:59.392367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:07.507 passed 00:07:07.507 Test: blob_thin_prov_alloc ...passed 00:07:07.507 Test: blob_insert_cluster_msg_test ...passed 00:07:07.765 Test: blob_thin_prov_rw ...passed 00:07:07.765 Test: blob_thin_prov_rle ...passed 00:07:07.765 Test: blob_thin_prov_rw_iov ...passed 00:07:07.765 Test: blob_snapshot_rw ...passed 00:07:07.765 Test: blob_snapshot_rw_iov ...passed 00:07:08.023 Test: blob_inflate_rw ...passed 00:07:08.023 Test: blob_snapshot_freeze_io ...passed 00:07:08.282 Test: blob_operation_split_rw ...passed 00:07:08.282 Test: blob_operation_split_rw_iov ...passed 00:07:08.282 Test: blob_simultaneous_operations ...[2024-11-18 14:09:00.309247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:08.282 [2024-11-18 14:09:00.309378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.282 [2024-11-18 14:09:00.309830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:08.282 [2024-11-18 14:09:00.309869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.282 [2024-11-18 14:09:00.312378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:08.282 [2024-11-18 14:09:00.312421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.282 [2024-11-18 14:09:00.312517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:08.282 [2024-11-18 14:09:00.312539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.282 passed 00:07:08.540 Test: blob_persist_test ...passed 00:07:08.540 Test: blob_decouple_snapshot ...passed 00:07:08.540 Test: blob_seek_io_unit ...passed 00:07:08.540 Test: blob_nested_freezes ...passed 00:07:08.540 Suite: blob_blob_copy_noextent 00:07:08.540 Test: blob_write ...passed 00:07:08.540 Test: blob_read ...passed 00:07:08.540 Test: blob_rw_verify ...passed 00:07:08.540 Test: blob_rw_verify_iov_nomem ...passed 00:07:08.799 Test: blob_rw_iov_read_only ...passed 00:07:08.799 Test: blob_xattr ...passed 00:07:08.799 Test: blob_dirty_shutdown ...passed 00:07:08.799 Test: blob_is_degraded ...passed 00:07:08.799 Suite: blob_esnap_bs_copy_noextent 00:07:08.799 Test: blob_esnap_create ...passed 00:07:08.799 Test: blob_esnap_thread_add_remove ...passed 00:07:08.799 Test: blob_esnap_clone_snapshot ...passed 00:07:08.799 Test: blob_esnap_clone_inflate ...passed 00:07:09.058 Test: blob_esnap_clone_decouple ...passed 00:07:09.058 Test: blob_esnap_clone_reload ...passed 00:07:09.058 Test: blob_esnap_hotplug ...passed 00:07:09.058 Suite: blob_copy_extent 00:07:09.058 Test: blob_init ...[2024-11-18 14:09:00.952262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:09.058 passed 00:07:09.058 Test: blob_thin_provision ...passed 00:07:09.058 Test: blob_read_only ...passed 00:07:09.058 Test: bs_load ...[2024-11-18 14:09:00.997372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:09.058 passed 00:07:09.058 Test: bs_load_custom_cluster_size ...passed 00:07:09.058 Test: bs_load_after_failed_grow ...passed 00:07:09.058 Test: bs_cluster_sz ...[2024-11-18 14:09:01.021307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:09.058 [2024-11-18 14:09:01.021490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:09.058 [2024-11-18 14:09:01.021531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:09.058 passed 00:07:09.058 Test: bs_resize_md ...passed 00:07:09.058 Test: bs_destroy ...passed 00:07:09.058 Test: bs_type ...passed 00:07:09.058 Test: bs_super_block ...passed 00:07:09.058 Test: bs_test_recover_cluster_count ...passed 00:07:09.058 Test: bs_grow_live ...passed 00:07:09.058 Test: bs_grow_live_no_space ...passed 00:07:09.058 Test: bs_test_grow ...passed 00:07:09.058 Test: blob_serialize_test ...passed 00:07:09.058 Test: super_block_crc ...passed 00:07:09.317 Test: blob_thin_prov_write_count_io ...passed 00:07:09.317 Test: bs_load_iter_test ...passed 00:07:09.317 Test: blob_relations ...[2024-11-18 14:09:01.161651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.317 [2024-11-18 14:09:01.161745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 [2024-11-18 14:09:01.162560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.317 [2024-11-18 14:09:01.162628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 passed 00:07:09.317 Test: blob_relations2 ...[2024-11-18 14:09:01.176531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.317 [2024-11-18 14:09:01.176613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 [2024-11-18 14:09:01.176656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.317 [2024-11-18 14:09:01.176682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 [2024-11-18 14:09:01.177953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.317 [2024-11-18 14:09:01.178008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 [2024-11-18 14:09:01.178368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:09.317 [2024-11-18 14:09:01.178422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 passed 00:07:09.317 Test: blob_relations3 ...passed 00:07:09.317 Test: blobstore_clean_power_failure ...passed 00:07:09.317 Test: blob_delete_snapshot_power_failure ...[2024-11-18 14:09:01.334581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:09.317 [2024-11-18 14:09:01.349132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:09.317 [2024-11-18 14:09:01.363041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:09.317 [2024-11-18 14:09:01.363193] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:09.317 [2024-11-18 14:09:01.363228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.317 [2024-11-18 14:09:01.380307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:09.317 [2024-11-18 14:09:01.380387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:09.317 [2024-11-18 14:09:01.380428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:09.317 [2024-11-18 14:09:01.380451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.575 [2024-11-18 14:09:01.394389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:09.575 [2024-11-18 14:09:01.394491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:09.575 [2024-11-18 14:09:01.394529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:09.575 [2024-11-18 14:09:01.394553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.575 [2024-11-18 14:09:01.408860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:09.575 [2024-11-18 14:09:01.409006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.575 [2024-11-18 14:09:01.423744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:09.575 [2024-11-18 14:09:01.423852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.575 [2024-11-18 14:09:01.438630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:09.575 [2024-11-18 14:09:01.438729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:09.575 passed 00:07:09.575 Test: blob_create_snapshot_power_failure ...[2024-11-18 14:09:01.477679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:09.575 [2024-11-18 14:09:01.489838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:09.575 [2024-11-18 14:09:01.514298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:09.575 [2024-11-18 14:09:01.526939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:09.575 passed 00:07:09.575 Test: blob_io_unit ...passed 00:07:09.575 Test: blob_io_unit_compatibility ...passed 00:07:09.575 Test: blob_ext_md_pages ...passed 00:07:09.575 Test: blob_esnap_io_4096_4096 ...passed 00:07:09.834 Test: blob_esnap_io_512_512 ...passed 00:07:09.834 Test: blob_esnap_io_4096_512 ...passed 00:07:09.834 Test: 
blob_esnap_io_512_4096 ...passed 00:07:09.834 Suite: blob_bs_copy_extent 00:07:09.834 Test: blob_open ...passed 00:07:09.834 Test: blob_create ...[2024-11-18 14:09:01.772830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:09.834 passed 00:07:09.834 Test: blob_create_loop ...passed 00:07:09.834 Test: blob_create_fail ...[2024-11-18 14:09:01.868914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.834 passed 00:07:10.093 Test: blob_create_internal ...passed 00:07:10.093 Test: blob_create_zero_extent ...passed 00:07:10.093 Test: blob_snapshot ...passed 00:07:10.093 Test: blob_clone ...passed 00:07:10.093 Test: blob_inflate ...[2024-11-18 14:09:02.037793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:10.093 passed 00:07:10.093 Test: blob_delete ...passed 00:07:10.093 Test: blob_resize_test ...[2024-11-18 14:09:02.096772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:10.093 passed 00:07:10.093 Test: channel_ops ...passed 00:07:10.351 Test: blob_super ...passed 00:07:10.351 Test: blob_rw_verify_iov ...passed 00:07:10.351 Test: blob_unmap ...passed 00:07:10.351 Test: blob_iter ...passed 00:07:10.351 Test: blob_parse_md ...passed 00:07:10.351 Test: bs_load_pending_removal ...passed 00:07:10.351 Test: bs_unload ...[2024-11-18 14:09:02.350044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:10.351 passed 00:07:10.351 Test: bs_usable_clusters ...passed 00:07:10.351 Test: blob_crc ...[2024-11-18 14:09:02.424100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:10.351 [2024-11-18 14:09:02.424242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:10.610 passed 00:07:10.610 Test: blob_flags ...passed 00:07:10.610 Test: bs_version ...passed 00:07:10.610 Test: blob_set_xattrs_test ...[2024-11-18 14:09:02.524283] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:10.610 [2024-11-18 14:09:02.524399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:10.610 passed 00:07:10.610 Test: blob_thin_prov_alloc ...passed 00:07:10.869 Test: blob_insert_cluster_msg_test ...passed 00:07:10.869 Test: blob_thin_prov_rw ...passed 00:07:10.869 Test: blob_thin_prov_rle ...passed 00:07:10.869 Test: blob_thin_prov_rw_iov ...passed 00:07:10.869 Test: blob_snapshot_rw ...passed 00:07:10.869 Test: blob_snapshot_rw_iov ...passed 00:07:11.127 Test: blob_inflate_rw ...passed 00:07:11.127 Test: blob_snapshot_freeze_io ...passed 00:07:11.386 Test: blob_operation_split_rw ...passed 00:07:11.386 Test: blob_operation_split_rw_iov ...passed 00:07:11.386 Test: blob_simultaneous_operations ...[2024-11-18 14:09:03.370712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.386 [2024-11-18 
14:09:03.370819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.386 [2024-11-18 14:09:03.371353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.386 [2024-11-18 14:09:03.371393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.386 [2024-11-18 14:09:03.374076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.386 [2024-11-18 14:09:03.374142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.386 [2024-11-18 14:09:03.374292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:11.386 [2024-11-18 14:09:03.374325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:11.386 passed 00:07:11.386 Test: blob_persist_test ...passed 00:07:11.650 Test: blob_decouple_snapshot ...passed 00:07:11.650 Test: blob_seek_io_unit ...passed 00:07:11.650 Test: blob_nested_freezes ...passed 00:07:11.650 Suite: blob_blob_copy_extent 00:07:11.650 Test: blob_write ...passed 00:07:11.650 Test: blob_read ...passed 00:07:11.650 Test: blob_rw_verify ...passed 00:07:11.650 Test: blob_rw_verify_iov_nomem ...passed 00:07:11.650 Test: blob_rw_iov_read_only ...passed 00:07:11.990 Test: blob_xattr ...passed 00:07:11.990 Test: blob_dirty_shutdown ...passed 00:07:11.990 Test: blob_is_degraded ...passed 00:07:11.990 Suite: blob_esnap_bs_copy_extent 00:07:11.990 Test: blob_esnap_create ...passed 00:07:11.990 Test: blob_esnap_thread_add_remove ...passed 00:07:11.990 Test: blob_esnap_clone_snapshot ...passed 00:07:11.990 Test: blob_esnap_clone_inflate ...passed 00:07:11.990 Test: blob_esnap_clone_decouple ...passed 00:07:12.248 Test: blob_esnap_clone_reload ...passed 00:07:12.248 Test: blob_esnap_hotplug ...passed 00:07:12.248 00:07:12.248 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.248 suites 16 16 n/a 0 0 00:07:12.248 tests 348 348 348 0 0 00:07:12.248 asserts 92605 92605 92605 0 n/a 00:07:12.248 00:07:12.248 Elapsed time = 13.523 seconds 00:07:12.248 14:09:04 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:12.248 00:07:12.248 00:07:12.248 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.248 http://cunit.sourceforge.net/ 00:07:12.248 00:07:12.248 00:07:12.248 Suite: blob_bdev 00:07:12.248 Test: create_bs_dev ...passed 00:07:12.248 Test: create_bs_dev_ro ...[2024-11-18 14:09:04.242890] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:12.248 passed 00:07:12.248 Test: create_bs_dev_rw ...passed 00:07:12.248 Test: claim_bs_dev ...[2024-11-18 14:09:04.243423] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:12.248 passed 00:07:12.248 Test: claim_bs_dev_ro ...passed 00:07:12.248 Test: deferred_destroy_refs ...passed 00:07:12.248 Test: deferred_destroy_channels ...passed 00:07:12.248 Test: deferred_destroy_threads ...passed 00:07:12.248 00:07:12.248 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.248 suites 1 1 n/a 0 0 00:07:12.248 tests 8 8 8 0 0 00:07:12.248 
asserts 119 119 119 0 n/a 00:07:12.248 00:07:12.248 Elapsed time = 0.001 seconds 00:07:12.248 14:09:04 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:12.248 00:07:12.248 00:07:12.248 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.248 http://cunit.sourceforge.net/ 00:07:12.248 00:07:12.248 00:07:12.248 Suite: tree 00:07:12.248 Test: blobfs_tree_op_test ...passed 00:07:12.248 00:07:12.248 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.248 suites 1 1 n/a 0 0 00:07:12.248 tests 1 1 1 0 0 00:07:12.248 asserts 27 27 27 0 n/a 00:07:12.248 00:07:12.248 Elapsed time = 0.000 seconds 00:07:12.248 14:09:04 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:12.248 00:07:12.248 00:07:12.248 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.248 http://cunit.sourceforge.net/ 00:07:12.248 00:07:12.248 00:07:12.248 Suite: blobfs_async_ut 00:07:12.507 Test: fs_init ...passed 00:07:12.507 Test: fs_open ...passed 00:07:12.507 Test: fs_create ...passed 00:07:12.507 Test: fs_truncate ...passed 00:07:12.507 Test: fs_rename ...[2024-11-18 14:09:04.474333] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:12.507 passed 00:07:12.507 Test: fs_rw_async ...passed 00:07:12.507 Test: fs_writev_readv_async ...passed 00:07:12.507 Test: tree_find_buffer_ut ...passed 00:07:12.507 Test: channel_ops ...passed 00:07:12.507 Test: channel_ops_sync ...passed 00:07:12.507 00:07:12.507 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.507 suites 1 1 n/a 0 0 00:07:12.507 tests 10 10 10 0 0 00:07:12.507 asserts 292 292 292 0 n/a 00:07:12.507 00:07:12.507 Elapsed time = 0.234 seconds 00:07:12.766 14:09:04 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:12.766 00:07:12.766 00:07:12.766 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.766 http://cunit.sourceforge.net/ 00:07:12.766 00:07:12.766 00:07:12.766 Suite: blobfs_sync_ut 00:07:12.766 Test: cache_read_after_write ...[2024-11-18 14:09:04.686090] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:12.766 passed 00:07:12.766 Test: file_length ...passed 00:07:12.766 Test: append_write_to_extend_blob ...passed 00:07:12.766 Test: partial_buffer ...passed 00:07:12.766 Test: cache_write_null_buffer ...passed 00:07:12.766 Test: fs_create_sync ...passed 00:07:12.766 Test: fs_rename_sync ...passed 00:07:12.766 Test: cache_append_no_cache ...passed 00:07:13.025 Test: fs_delete_file_without_close ...passed 00:07:13.025 00:07:13.025 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.025 suites 1 1 n/a 0 0 00:07:13.025 tests 9 9 9 0 0 00:07:13.025 asserts 345 345 345 0 n/a 00:07:13.025 00:07:13.025 Elapsed time = 0.466 seconds 00:07:13.025 14:09:04 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:13.025 00:07:13.025 00:07:13.025 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.025 http://cunit.sourceforge.net/ 00:07:13.025 00:07:13.025 00:07:13.025 Suite: blobfs_bdev_ut 00:07:13.025 Test: spdk_blobfs_bdev_detect_test ...[2024-11-18 14:09:04.913057] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:07:13.025 passed 00:07:13.025 Test: spdk_blobfs_bdev_create_test ...[2024-11-18 14:09:04.913787] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:13.025 passed 00:07:13.025 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:13.025 00:07:13.025 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.025 suites 1 1 n/a 0 0 00:07:13.025 tests 3 3 3 0 0 00:07:13.025 asserts 9 9 9 0 n/a 00:07:13.026 00:07:13.026 Elapsed time = 0.001 seconds 00:07:13.026 00:07:13.026 real 0m14.357s 00:07:13.026 user 0m13.831s 00:07:13.026 sys 0m0.771s 00:07:13.026 14:09:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.026 14:09:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.026 ************************************ 00:07:13.026 END TEST unittest_blob_blobfs 00:07:13.026 ************************************ 00:07:13.026 14:09:04 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:07:13.026 14:09:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.026 14:09:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.026 14:09:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.026 ************************************ 00:07:13.026 START TEST unittest_event 00:07:13.026 ************************************ 00:07:13.026 14:09:04 -- common/autotest_common.sh@1114 -- # unittest_event 00:07:13.026 14:09:04 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:13.026 00:07:13.026 00:07:13.026 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.026 http://cunit.sourceforge.net/ 00:07:13.026 00:07:13.026 00:07:13.026 Suite: app_suite 00:07:13.026 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:13.026 options: 00:07:13.026 -c, --config JSON config file (default none) 00:07:13.026 app_ut: invalid option -- 'z' 00:07:13.026 --json JSON config file (default none) 00:07:13.026 --json-ignore-init-errors 00:07:13.026 don't exit on invalid config entry 00:07:13.026 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:13.026 -g, --single-file-segments 00:07:13.026 force creating just one hugetlbfs file 00:07:13.026 -h, --help show this usage 00:07:13.026 -i, --shm-id shared memory ID (optional) 00:07:13.026 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:13.026 --lcores lcore to CPU mapping list. The list is in the format: 00:07:13.026 [<,lcores[@CPUs]>...] 00:07:13.026 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:13.026 Within the group, '-' is used for range separator, 00:07:13.026 ',' is used for single number separator. 00:07:13.026 '( )' can be omitted for single element group, 00:07:13.026 '@' can be omitted if cpus and lcores have the same value 00:07:13.026 -n, --mem-channels channel number of memory channels used for DPDK 00:07:13.026 -p, --main-core main (primary) core for DPDK 00:07:13.026 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:13.026 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:13.026 --disable-cpumask-locks Disable CPU core lock files. 
00:07:13.026 --silence-noticelog disable notice level logging to stderr 00:07:13.026 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:13.026 -u, --no-pci disable PCI access 00:07:13.026 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:13.026 --max-delay maximum reactor delay (in microseconds) 00:07:13.026 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:13.026 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:13.026 -R, --huge-unlink unlink huge files after initialization 00:07:13.026 -v, --version print SPDK version 00:07:13.026 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:13.026 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:13.026 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:13.026 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:13.026 Tracepoints vary in size and can use more than one trace entry. 00:07:13.026 --rpcs-allowed comma-separated list of permitted RPCS 00:07:13.026 --env-context Opaque context for use of the env implementation 00:07:13.026 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:13.026 --no-huge run without using hugepages 00:07:13.026 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:13.026 -e, --tpoint-group [:] 00:07:13.026 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:13.026 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:13.026 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:13.026 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:13.026 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:13.026 app_ut [options] 00:07:13.026 options: 00:07:13.026 -c, --config JSON config file (default none) 00:07:13.026 --json JSON config file (default none) 00:07:13.026 --json-ignore-init-errors 00:07:13.026 don't exit on invalid config entry 00:07:13.026 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:13.026 -g, --single-file-segments 00:07:13.026 force creating just one hugetlbfs file 00:07:13.026 -h, --help show this usage 00:07:13.026 -i, --shm-id shared memory ID (optional) 00:07:13.026 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:13.026 --lcores lcore to CPU mapping list. The list is in the format: 00:07:13.026 [<,lcores[@CPUs]>...] 00:07:13.026 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:13.026 Within the group, '-' is used for range separator, 00:07:13.026 ',' is used for single number separator. 00:07:13.026 '( )' can be omitted for single element group, 00:07:13.026 '@' can be omitted if cpus and lcores have the same value 00:07:13.026 -n, --mem-channels channel number of memory channels used for DPDK 00:07:13.026 -p, --main-core main (primary) core for DPDK 00:07:13.026 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:13.026 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:13.026 --disable-cpumask-locks Disable CPU core lock files. 
00:07:13.026 --silence-noticelog disable notice level logging to stderr 00:07:13.026 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:13.026 -u, --no-pci disable PCI access 00:07:13.026 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:13.026 --max-delay maximum reactor delay (in microseconds) 00:07:13.026 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:13.026 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:13.026 -R, --huge-unlink unlink huge files after initialization 00:07:13.026 -v, --version print SPDK version 00:07:13.026 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:13.026 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:13.026 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:13.026 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:13.026 Tracepoints vary in size and can use more than one trace entry. 00:07:13.026 --rpcs-allowed comma-separated list of permitted RPCS 00:07:13.026 --env-context Opaque context for use of the env implementation 00:07:13.026 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:13.026 --no-huge run without using hugepages 00:07:13.026 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:13.026 -e, --tpoint-group [:] 00:07:13.026 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:13.026 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:13.026 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:13.026 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:13.026 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:13.026 app_ut: unrecognized option '--test-long-opt' 00:07:13.026 [2024-11-18 14:09:04.997370] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:07:13.026 app_ut [options] 00:07:13.026 options: 00:07:13.026 -c, --config JSON config file (default none) 00:07:13.026 --json JSON config file (default none) 00:07:13.026 --json-ignore-init-errors 00:07:13.027 don't exit on invalid config entry 00:07:13.027 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:13.027 -g, --single-file-segments 00:07:13.027 force creating just one hugetlbfs file 00:07:13.027 -h, --help show this usage 00:07:13.027 -i, --shm-id shared memory ID (optional) 00:07:13.027 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:13.027 --lcores lcore to CPU mapping list. The list is in the format: 00:07:13.027 [<,lcores[@CPUs]>...] 00:07:13.027 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:13.027 Within the group, '-' is used for range separator, 00:07:13.027 ',' is used for single number separator. 
00:07:13.027 '( )' can be omitted for single element group, 00:07:13.027 [2024-11-18 14:09:04.997638] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:13.027 '@' can be omitted if cpus and lcores have the same value 00:07:13.027 -n, --mem-channels channel number of memory channels used for DPDK 00:07:13.027 -p, --main-core main (primary) core for DPDK 00:07:13.027 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:13.027 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:13.027 --disable-cpumask-locks Disable CPU core lock files. 00:07:13.027 --silence-noticelog disable notice level logging to stderr 00:07:13.027 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:13.027 -u, --no-pci disable PCI access 00:07:13.027 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:13.027 --max-delay maximum reactor delay (in microseconds) 00:07:13.027 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:13.027 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:13.027 -R, --huge-unlink unlink huge files after initialization 00:07:13.027 -v, --version print SPDK version 00:07:13.027 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:13.027 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:13.027 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:13.027 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:13.027 Tracepoints vary in size and can use more than one trace entry. 00:07:13.027 --rpcs-allowed comma-separated list of permitted RPCS 00:07:13.027 --env-context Opaque context for use of the env implementation 00:07:13.027 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:13.027 --no-huge run without using hugepages 00:07:13.027 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:13.027 -e, --tpoint-group [:] 00:07:13.027 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:13.027 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:13.027 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:07:13.027 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:13.027 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:13.027 [2024-11-18 14:09:04.997815] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:13.027 passed 00:07:13.027 00:07:13.027 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.027 suites 1 1 n/a 0 0 00:07:13.027 tests 1 1 1 0 0 00:07:13.027 asserts 8 8 8 0 n/a 00:07:13.027 00:07:13.027 Elapsed time = 0.001 seconds 00:07:13.027 14:09:05 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:13.027 00:07:13.027 00:07:13.027 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.027 http://cunit.sourceforge.net/ 00:07:13.027 00:07:13.027 00:07:13.027 Suite: app_suite 00:07:13.027 Test: test_create_reactor ...passed 00:07:13.027 Test: test_init_reactors ...passed 00:07:13.027 Test: test_event_call ...passed 00:07:13.027 Test: test_schedule_thread ...passed 00:07:13.027 Test: test_reschedule_thread ...passed 00:07:13.027 Test: test_bind_thread ...passed 00:07:13.027 Test: test_for_each_reactor ...passed 00:07:13.027 Test: test_reactor_stats ...passed 00:07:13.027 Test: test_scheduler ...passed 00:07:13.027 Test: test_governor ...passed 00:07:13.027 00:07:13.027 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.027 suites 1 1 n/a 0 0 00:07:13.027 tests 10 10 10 0 0 00:07:13.027 asserts 344 344 344 0 n/a 00:07:13.027 00:07:13.027 Elapsed time = 0.019 seconds 00:07:13.027 00:07:13.027 real 0m0.094s 00:07:13.027 user 0m0.053s 00:07:13.027 sys 0m0.042s 00:07:13.027 14:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.027 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.027 ************************************ 00:07:13.027 END TEST unittest_event 00:07:13.027 ************************************ 00:07:13.286 14:09:05 -- unit/unittest.sh@209 -- # uname -s 00:07:13.286 14:09:05 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:07:13.286 14:09:05 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:07:13.286 14:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.286 14:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.286 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.286 ************************************ 00:07:13.286 START TEST unittest_ftl 00:07:13.286 ************************************ 00:07:13.286 14:09:05 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:07:13.286 14:09:05 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:13.286 00:07:13.286 00:07:13.286 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.286 http://cunit.sourceforge.net/ 00:07:13.286 00:07:13.286 00:07:13.286 Suite: ftl_band_suite 00:07:13.286 Test: test_band_block_offset_from_addr_base ...passed 00:07:13.286 Test: test_band_block_offset_from_addr_offset ...passed 00:07:13.286 Test: test_band_addr_from_block_offset ...passed 00:07:13.286 Test: test_band_set_addr ...passed 00:07:13.286 Test: test_invalidate_addr ...passed 00:07:13.286 Test: test_next_xfer_addr ...passed 00:07:13.286 00:07:13.286 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.286 suites 1 1 n/a 0 0 00:07:13.286 tests 6 6 6 0 0 00:07:13.286 asserts 30356 30356 30356 0 n/a 00:07:13.286 
00:07:13.286 Elapsed time = 0.185 seconds 00:07:13.545 14:09:05 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:13.545 00:07:13.545 00:07:13.545 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.545 http://cunit.sourceforge.net/ 00:07:13.545 00:07:13.545 00:07:13.545 Suite: ftl_bitmap 00:07:13.546 Test: test_ftl_bitmap_create ...[2024-11-18 14:09:05.418958] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:13.546 [2024-11-18 14:09:05.419473] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:13.546 passed 00:07:13.546 Test: test_ftl_bitmap_get ...passed 00:07:13.546 Test: test_ftl_bitmap_set ...passed 00:07:13.546 Test: test_ftl_bitmap_clear ...passed 00:07:13.546 Test: test_ftl_bitmap_find_first_set ...passed 00:07:13.546 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:13.546 Test: test_ftl_bitmap_count_set ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 7 7 7 0 0 00:07:13.546 asserts 137 137 137 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.001 seconds 00:07:13.546 14:09:05 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:13.546 00:07:13.546 00:07:13.546 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.546 http://cunit.sourceforge.net/ 00:07:13.546 00:07:13.546 00:07:13.546 Suite: ftl_io_suite 00:07:13.546 Test: test_completion ...passed 00:07:13.546 Test: test_multiple_ios ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 2 2 2 0 0 00:07:13.546 asserts 47 47 47 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.003 seconds 00:07:13.546 14:09:05 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:13.546 00:07:13.546 00:07:13.546 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.546 http://cunit.sourceforge.net/ 00:07:13.546 00:07:13.546 00:07:13.546 Suite: ftl_mngt 00:07:13.546 Test: test_next_step ...passed 00:07:13.546 Test: test_continue_step ...passed 00:07:13.546 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:13.546 Test: test_fail_step ...passed 00:07:13.546 Test: test_mngt_call_and_call_rollback ...passed 00:07:13.546 Test: test_nested_process_failure ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 6 6 6 0 0 00:07:13.546 asserts 176 176 176 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.002 seconds 00:07:13.546 14:09:05 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:13.546 00:07:13.546 00:07:13.546 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.546 http://cunit.sourceforge.net/ 00:07:13.546 00:07:13.546 00:07:13.546 Suite: ftl_mempool 00:07:13.546 Test: test_ftl_mempool_create ...passed 00:07:13.546 Test: test_ftl_mempool_get_put ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 2 2 2 0 0 00:07:13.546 asserts 36 36 36 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.000 seconds 00:07:13.546 14:09:05 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:13.546 00:07:13.546 00:07:13.546 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.546 http://cunit.sourceforge.net/ 00:07:13.546 00:07:13.546 00:07:13.546 Suite: ftl_addr64_suite 00:07:13.546 Test: test_addr_cached ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 1 1 1 0 0 00:07:13.546 asserts 1536 1536 1536 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.000 seconds 00:07:13.546 14:09:05 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:13.546 00:07:13.546 00:07:13.546 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.546 http://cunit.sourceforge.net/ 00:07:13.546 00:07:13.546 00:07:13.546 Suite: ftl_sb 00:07:13.546 Test: test_sb_crc_v2 ...passed 00:07:13.546 Test: test_sb_crc_v3 ...passed 00:07:13.546 Test: test_sb_v3_md_layout ...[2024-11-18 14:09:05.571450] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:13.546 [2024-11-18 14:09:05.571862] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:13.546 [2024-11-18 14:09:05.572009] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:13.546 [2024-11-18 14:09:05.572155] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:13.546 [2024-11-18 14:09:05.572283] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:13.546 [2024-11-18 14:09:05.572399] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:13.546 [2024-11-18 14:09:05.572543] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:13.546 [2024-11-18 14:09:05.572690] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:13.546 [2024-11-18 14:09:05.572794] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:13.546 [2024-11-18 14:09:05.572935] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:13.546 [2024-11-18 14:09:05.573074] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:13.546 passed 00:07:13.546 Test: test_sb_v5_md_layout ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 4 4 4 0 0 00:07:13.546 asserts 148 148 148 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.002 seconds 00:07:13.546 14:09:05 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:13.546 00:07:13.546 00:07:13.546 CUnit - A unit testing framework 
for C - Version 2.1-3 00:07:13.546 http://cunit.sourceforge.net/ 00:07:13.546 00:07:13.546 00:07:13.546 Suite: ftl_layout_upgrade 00:07:13.546 Test: test_l2p_upgrade ...passed 00:07:13.546 00:07:13.546 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.546 suites 1 1 n/a 0 0 00:07:13.546 tests 1 1 1 0 0 00:07:13.546 asserts 140 140 140 0 n/a 00:07:13.546 00:07:13.546 Elapsed time = 0.001 seconds 00:07:13.805 00:07:13.805 real 0m0.488s 00:07:13.805 user 0m0.215s 00:07:13.805 sys 0m0.265s 00:07:13.805 14:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.805 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.805 ************************************ 00:07:13.805 END TEST unittest_ftl 00:07:13.805 ************************************ 00:07:13.805 14:09:05 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:13.806 14:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.806 14:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.806 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 ************************************ 00:07:13.806 START TEST unittest_accel 00:07:13.806 ************************************ 00:07:13.806 14:09:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:13.806 00:07:13.806 00:07:13.806 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.806 http://cunit.sourceforge.net/ 00:07:13.806 00:07:13.806 00:07:13.806 Suite: accel_sequence 00:07:13.806 Test: test_sequence_fill_copy ...passed 00:07:13.806 Test: test_sequence_abort ...passed 00:07:13.806 Test: test_sequence_append_error ...passed 00:07:13.806 Test: test_sequence_completion_error ...[2024-11-18 14:09:05.706236] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f3f4edc27c0 00:07:13.806 [2024-11-18 14:09:05.706748] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f3f4edc27c0 00:07:13.806 [2024-11-18 14:09:05.706956] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f3f4edc27c0 00:07:13.806 [2024-11-18 14:09:05.707148] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f3f4edc27c0 00:07:13.806 passed 00:07:13.806 Test: test_sequence_decompress ...passed 00:07:13.806 Test: test_sequence_reverse ...passed 00:07:13.806 Test: test_sequence_copy_elision ...passed 00:07:13.806 Test: test_sequence_accel_buffers ...passed 00:07:13.806 Test: test_sequence_memory_domain ...[2024-11-18 14:09:05.720362] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:13.806 [2024-11-18 14:09:05.720751] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:13.806 passed 00:07:13.806 Test: test_sequence_module_memory_domain ...passed 00:07:13.806 Test: test_sequence_crypto ...passed 00:07:13.806 Test: test_sequence_driver ...[2024-11-18 14:09:05.728504] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f3f4e19a7c0 using driver: ut 00:07:13.806 
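As a point of reference, every test binary invoked in this run is a stock CUnit program; a minimal sketch of that harness, assuming nothing beyond CUnit itself (the suite and test names here are illustrative, not taken from this run):

    #include <CUnit/Basic.h>

    /* Illustrative test body; the real suites in this log assert on SPDK library behavior. */
    static void test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        /* Produces the per-suite "Run Summary" blocks seen throughout this log. */
        CU_basic_run_tests();
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();

        /* A non-zero exit code is what run_test/xtrace reports as a failure. */
        return (int)num_failures;
    }
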
[2024-11-18 14:09:05.728789] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f3f4e19a7c0 through driver: ut 00:07:13.806 passed 00:07:13.806 Test: test_sequence_same_iovs ...passed 00:07:13.806 Test: test_sequence_crc32 ...passed 00:07:13.806 Suite: accel 00:07:13.806 Test: test_spdk_accel_task_complete ...passed 00:07:13.806 Test: test_get_task ...passed 00:07:13.806 Test: test_spdk_accel_submit_copy ...passed 00:07:13.806 Test: test_spdk_accel_submit_dualcast ...[2024-11-18 14:09:05.735468] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:13.806 [2024-11-18 14:09:05.735636] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:13.806 passed 00:07:13.806 Test: test_spdk_accel_submit_compare ...passed 00:07:13.806 Test: test_spdk_accel_submit_fill ...passed 00:07:13.806 Test: test_spdk_accel_submit_crc32c ...passed 00:07:13.806 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:13.806 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:13.806 Test: test_spdk_accel_submit_xor ...passed 00:07:13.806 Test: test_spdk_accel_module_find_by_name ...passed 00:07:13.806 Test: test_spdk_accel_module_register ...passed 00:07:13.806 00:07:13.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.806 suites 2 2 n/a 0 0 00:07:13.806 tests 26 26 26 0 0 00:07:13.806 asserts 831 831 831 0 n/a 00:07:13.806 00:07:13.806 Elapsed time = 0.038 seconds 00:07:13.806 00:07:13.806 real 0m0.086s 00:07:13.806 user 0m0.051s 00:07:13.806 sys 0m0.030s 00:07:13.806 14:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.806 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 ************************************ 00:07:13.806 END TEST unittest_accel 00:07:13.806 ************************************ 00:07:13.806 14:09:05 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:13.806 14:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.806 14:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.806 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 ************************************ 00:07:13.806 START TEST unittest_ioat 00:07:13.806 ************************************ 00:07:13.806 14:09:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:13.806 00:07:13.806 00:07:13.806 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.806 http://cunit.sourceforge.net/ 00:07:13.806 00:07:13.806 00:07:13.806 Suite: ioat 00:07:13.806 Test: ioat_state_check ...passed 00:07:13.806 00:07:13.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.806 suites 1 1 n/a 0 0 00:07:13.806 tests 1 1 1 0 0 00:07:13.806 asserts 32 32 32 0 n/a 00:07:13.806 00:07:13.806 Elapsed time = 0.000 seconds 00:07:13.806 00:07:13.806 real 0m0.029s 00:07:13.806 user 0m0.016s 00:07:13.806 sys 0m0.012s 00:07:13.806 14:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.806 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:13.806 ************************************ 00:07:13.806 END TEST unittest_ioat 00:07:13.806 ************************************ 00:07:14.068 14:09:05 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:14.068 14:09:05 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:14.068 14:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.068 14:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.068 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:14.068 ************************************ 00:07:14.068 START TEST unittest_idxd_user 00:07:14.068 ************************************ 00:07:14.068 14:09:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:14.068 00:07:14.068 00:07:14.068 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.068 http://cunit.sourceforge.net/ 00:07:14.068 00:07:14.068 00:07:14.068 Suite: idxd_user 00:07:14.068 Test: test_idxd_wait_cmd ...[2024-11-18 14:09:05.918560] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:14.068 [2024-11-18 14:09:05.918969] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:14.068 passed 00:07:14.068 Test: test_idxd_reset_dev ...[2024-11-18 14:09:05.919481] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:14.068 [2024-11-18 14:09:05.919670] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:14.068 passed 00:07:14.068 Test: test_idxd_group_config ...passed 00:07:14.068 Test: test_idxd_wq_config ...passed 00:07:14.068 00:07:14.068 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.068 suites 1 1 n/a 0 0 00:07:14.068 tests 4 4 4 0 0 00:07:14.068 asserts 20 20 20 0 n/a 00:07:14.068 00:07:14.068 Elapsed time = 0.001 seconds 00:07:14.068 00:07:14.068 real 0m0.030s 00:07:14.068 user 0m0.017s 00:07:14.068 sys 0m0.012s 00:07:14.068 14:09:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.068 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:14.068 ************************************ 00:07:14.068 END TEST unittest_idxd_user 00:07:14.068 ************************************ 00:07:14.068 14:09:05 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:07:14.068 14:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.068 14:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.068 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:07:14.068 ************************************ 00:07:14.068 START TEST unittest_iscsi 00:07:14.068 ************************************ 00:07:14.068 14:09:05 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:07:14.068 14:09:05 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:14.068 00:07:14.068 00:07:14.068 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.068 http://cunit.sourceforge.net/ 00:07:14.068 00:07:14.068 00:07:14.068 Suite: conn_suite 00:07:14.068 Test: read_task_split_in_order_case ...passed 00:07:14.068 Test: read_task_split_reverse_order_case ...passed 00:07:14.068 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:14.068 Test: process_non_read_task_completion_test ...passed 00:07:14.068 Test: free_tasks_on_connection ...passed 00:07:14.068 Test: free_tasks_with_queued_datain ...passed 00:07:14.068 Test: 
abort_queued_datain_task_test ...passed 00:07:14.068 Test: abort_queued_datain_tasks_test ...passed 00:07:14.068 00:07:14.068 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.068 suites 1 1 n/a 0 0 00:07:14.068 tests 8 8 8 0 0 00:07:14.068 asserts 230 230 230 0 n/a 00:07:14.068 00:07:14.068 Elapsed time = 0.000 seconds 00:07:14.068 14:09:06 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:14.068 00:07:14.068 00:07:14.068 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.068 http://cunit.sourceforge.net/ 00:07:14.068 00:07:14.068 00:07:14.068 Suite: iscsi_suite 00:07:14.068 Test: param_negotiation_test ...passed 00:07:14.068 Test: list_negotiation_test ...passed 00:07:14.068 Test: parse_valid_test ...passed 00:07:14.068 Test: parse_invalid_test ...[2024-11-18 14:09:06.053480] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:14.068 [2024-11-18 14:09:06.053949] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:14.068 [2024-11-18 14:09:06.054149] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:14.068 [2024-11-18 14:09:06.054350] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:14.068 [2024-11-18 14:09:06.054633] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:14.069 [2024-11-18 14:09:06.054814] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:14.069 [2024-11-18 14:09:06.055099] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:14.069 passed 00:07:14.069 00:07:14.069 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.069 suites 1 1 n/a 0 0 00:07:14.069 tests 4 4 4 0 0 00:07:14.069 asserts 161 161 161 0 n/a 00:07:14.069 00:07:14.069 Elapsed time = 0.006 seconds 00:07:14.069 14:09:06 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:14.069 00:07:14.069 00:07:14.069 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.069 http://cunit.sourceforge.net/ 00:07:14.069 00:07:14.069 00:07:14.069 Suite: iscsi_target_node_suite 00:07:14.069 Test: add_lun_test_cases ...[2024-11-18 14:09:06.089104] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:14.069 [2024-11-18 14:09:06.089612] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:14.069 [2024-11-18 14:09:06.089889] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:14.069 [2024-11-18 14:09:06.090088] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:14.069 [2024-11-18 14:09:06.090271] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:14.069 passed 00:07:14.069 Test: allow_any_allowed ...passed 00:07:14.069 Test: allow_ipv6_allowed ...passed 00:07:14.069 Test: allow_ipv6_denied ...passed 00:07:14.069 Test: allow_ipv6_invalid ...passed 00:07:14.069 Test: allow_ipv4_allowed ...passed 00:07:14.069 Test: allow_ipv4_denied ...passed 00:07:14.069 Test: allow_ipv4_invalid 
...passed 00:07:14.069 Test: node_access_allowed ...passed 00:07:14.069 Test: node_access_denied_by_empty_netmask ...passed 00:07:14.069 Test: node_access_multi_initiator_groups_cases ...passed 00:07:14.069 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:14.069 Test: chap_param_test_cases ...[2024-11-18 14:09:06.092731] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:14.069 [2024-11-18 14:09:06.092933] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:14.069 [2024-11-18 14:09:06.093145] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:14.069 [2024-11-18 14:09:06.093359] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:14.069 [2024-11-18 14:09:06.093554] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:14.069 passed 00:07:14.069 00:07:14.069 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.069 suites 1 1 n/a 0 0 00:07:14.069 tests 13 13 13 0 0 00:07:14.069 asserts 50 50 50 0 n/a 00:07:14.069 00:07:14.069 Elapsed time = 0.002 seconds 00:07:14.069 14:09:06 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:14.069 00:07:14.069 00:07:14.069 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.069 http://cunit.sourceforge.net/ 00:07:14.069 00:07:14.069 00:07:14.069 Suite: iscsi_suite 00:07:14.069 Test: op_login_check_target_test ...[2024-11-18 14:09:06.131064] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:14.069 passed 00:07:14.069 Test: op_login_session_normal_test ...[2024-11-18 14:09:06.131941] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:14.069 [2024-11-18 14:09:06.132158] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:14.069 [2024-11-18 14:09:06.132351] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:14.069 [2024-11-18 14:09:06.132587] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:14.069 [2024-11-18 14:09:06.132871] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:14.069 [2024-11-18 14:09:06.133168] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:14.069 [2024-11-18 14:09:06.133386] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:14.069 passed 00:07:14.069 Test: maxburstlength_test ...[2024-11-18 14:09:06.134044] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:14.069 [2024-11-18 14:09:06.134268] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:07:14.069 passed 00:07:14.069 Test: underflow_for_read_transfer_test ...passed 00:07:14.069 Test: underflow_for_zero_read_transfer_test ...passed 00:07:14.069 Test: underflow_for_request_sense_test ...passed 00:07:14.069 Test: underflow_for_check_condition_test ...passed 00:07:14.069 Test: add_transfer_task_test ...passed 00:07:14.069 Test: get_transfer_task_test ...passed 00:07:14.069 Test: del_transfer_task_test ...passed 00:07:14.069 Test: clear_all_transfer_tasks_test ...passed 00:07:14.069 Test: build_iovs_test ...passed 00:07:14.069 Test: build_iovs_with_md_test ...passed 00:07:14.069 Test: pdu_hdr_op_login_test ...[2024-11-18 14:09:06.138388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:14.069 [2024-11-18 14:09:06.138683] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:14.069 [2024-11-18 14:09:06.138928] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:14.069 passed 00:07:14.069 Test: pdu_hdr_op_text_test ...[2024-11-18 14:09:06.139426] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:14.069 [2024-11-18 14:09:06.139692] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:14.069 [2024-11-18 14:09:06.139875] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:14.069 passed 00:07:14.069 Test: pdu_hdr_op_logout_test ...[2024-11-18 14:09:06.140337] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:07:14.330 passed 00:07:14.330 Test: pdu_hdr_op_scsi_test ...[2024-11-18 14:09:06.140933] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:14.330 [2024-11-18 14:09:06.141122] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:14.330 [2024-11-18 14:09:06.141328] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:14.330 [2024-11-18 14:09:06.141590] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:14.330 [2024-11-18 14:09:06.141854] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:14.330 [2024-11-18 14:09:06.142186] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:14.330 passed 00:07:14.330 Test: pdu_hdr_op_task_mgmt_test ...[2024-11-18 14:09:06.142666] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:14.330 [2024-11-18 14:09:06.142900] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:14.330 passed 00:07:14.330 Test: pdu_hdr_op_nopout_test ...[2024-11-18 14:09:06.143522] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:14.330 [2024-11-18 14:09:06.143792] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:14.330 [2024-11-18 14:09:06.143991] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:14.330 [2024-11-18 14:09:06.144153] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:14.330 passed 00:07:14.330 Test: pdu_hdr_op_data_test ...[2024-11-18 14:09:06.144547] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:14.330 [2024-11-18 14:09:06.144761] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:14.330 [2024-11-18 14:09:06.145287] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:14.330 [2024-11-18 14:09:06.145561] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:14.330 [2024-11-18 14:09:06.145766] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:14.330 [2024-11-18 14:09:06.146021] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:14.330 [2024-11-18 14:09:06.146205] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:14.330 passed 00:07:14.330 Test: empty_text_with_cbit_test ...passed 00:07:14.330 Test: pdu_payload_read_test ...[2024-11-18 
14:09:06.149052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:14.330 passed 00:07:14.330 Test: data_out_pdu_sequence_test ...passed 00:07:14.330 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:14.330 00:07:14.330 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.330 suites 1 1 n/a 0 0 00:07:14.330 tests 24 24 24 0 0 00:07:14.330 asserts 150253 150253 150253 0 n/a 00:07:14.330 00:07:14.330 Elapsed time = 0.020 seconds 00:07:14.330 14:09:06 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:14.330 00:07:14.330 00:07:14.330 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.330 http://cunit.sourceforge.net/ 00:07:14.330 00:07:14.330 00:07:14.330 Suite: init_grp_suite 00:07:14.330 Test: create_initiator_group_success_case ...passed 00:07:14.330 Test: find_initiator_group_success_case ...passed 00:07:14.330 Test: register_initiator_group_twice_case ...passed 00:07:14.330 Test: add_initiator_name_success_case ...passed 00:07:14.330 Test: add_initiator_name_fail_case ...[2024-11-18 14:09:06.197778] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:14.330 passed 00:07:14.330 Test: delete_all_initiator_names_success_case ...passed 00:07:14.330 Test: add_netmask_success_case ...passed 00:07:14.330 Test: add_netmask_fail_case ...[2024-11-18 14:09:06.198826] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:14.330 passed 00:07:14.330 Test: delete_all_netmasks_success_case ...passed 00:07:14.330 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:14.330 Test: netmask_overwrite_all_to_any_case ...passed 00:07:14.330 Test: add_delete_initiator_names_case ...passed 00:07:14.330 Test: add_duplicated_initiator_names_case ...passed 00:07:14.330 Test: delete_nonexisting_initiator_names_case ...passed 00:07:14.330 Test: add_delete_netmasks_case ...passed 00:07:14.330 Test: add_duplicated_netmasks_case ...passed 00:07:14.330 Test: delete_nonexisting_netmasks_case ...passed 00:07:14.330 00:07:14.330 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.330 suites 1 1 n/a 0 0 00:07:14.330 tests 17 17 17 0 0 00:07:14.330 asserts 108 108 108 0 n/a 00:07:14.330 00:07:14.330 Elapsed time = 0.002 seconds 00:07:14.330 14:09:06 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:14.330 00:07:14.330 00:07:14.330 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.330 http://cunit.sourceforge.net/ 00:07:14.330 00:07:14.330 00:07:14.330 Suite: portal_grp_suite 00:07:14.331 Test: portal_create_ipv4_normal_case ...passed 00:07:14.331 Test: portal_create_ipv6_normal_case ...passed 00:07:14.331 Test: portal_create_ipv4_wildcard_case ...passed 00:07:14.331 Test: portal_create_ipv6_wildcard_case ...passed 00:07:14.331 Test: portal_create_twice_case ...[2024-11-18 14:09:06.236844] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:14.331 passed 00:07:14.331 Test: portal_grp_register_unregister_case ...passed 00:07:14.331 Test: portal_grp_register_twice_case ...passed 00:07:14.331 Test: portal_grp_add_delete_case ...passed 00:07:14.331 Test: portal_grp_add_delete_twice_case ...passed 00:07:14.331 00:07:14.331 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:14.331 suites 1 1 n/a 0 0 00:07:14.331 tests 9 9 9 0 0 00:07:14.331 asserts 44 44 44 0 n/a 00:07:14.331 00:07:14.331 Elapsed time = 0.004 seconds 00:07:14.331 00:07:14.331 real 0m0.266s 00:07:14.331 user 0m0.163s 00:07:14.331 sys 0m0.081s 00:07:14.331 14:09:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.331 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.331 ************************************ 00:07:14.331 END TEST unittest_iscsi 00:07:14.331 ************************************ 00:07:14.331 14:09:06 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:07:14.331 14:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.331 14:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.331 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.331 ************************************ 00:07:14.331 START TEST unittest_json 00:07:14.331 ************************************ 00:07:14.331 14:09:06 -- common/autotest_common.sh@1114 -- # unittest_json 00:07:14.331 14:09:06 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:14.331 00:07:14.331 00:07:14.331 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.331 http://cunit.sourceforge.net/ 00:07:14.331 00:07:14.331 00:07:14.331 Suite: json 00:07:14.331 Test: test_parse_literal ...passed 00:07:14.331 Test: test_parse_string_simple ...passed 00:07:14.331 Test: test_parse_string_control_chars ...passed 00:07:14.331 Test: test_parse_string_utf8 ...passed 00:07:14.331 Test: test_parse_string_escapes_twochar ...passed 00:07:14.331 Test: test_parse_string_escapes_unicode ...passed 00:07:14.331 Test: test_parse_number ...passed 00:07:14.331 Test: test_parse_array ...passed 00:07:14.331 Test: test_parse_object ...passed 00:07:14.331 Test: test_parse_nesting ...passed 00:07:14.331 Test: test_parse_comment ...passed 00:07:14.331 00:07:14.331 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.331 suites 1 1 n/a 0 0 00:07:14.331 tests 11 11 11 0 0 00:07:14.331 asserts 1516 1516 1516 0 n/a 00:07:14.331 00:07:14.331 Elapsed time = 0.002 seconds 00:07:14.331 14:09:06 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:14.331 00:07:14.331 00:07:14.331 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.331 http://cunit.sourceforge.net/ 00:07:14.331 00:07:14.331 00:07:14.331 Suite: json 00:07:14.331 Test: test_strequal ...passed 00:07:14.331 Test: test_num_to_uint16 ...passed 00:07:14.331 Test: test_num_to_int32 ...passed 00:07:14.331 Test: test_num_to_uint64 ...passed 00:07:14.331 Test: test_decode_object ...passed 00:07:14.331 Test: test_decode_array ...passed 00:07:14.331 Test: test_decode_bool ...passed 00:07:14.331 Test: test_decode_uint16 ...passed 00:07:14.331 Test: test_decode_int32 ...passed 00:07:14.331 Test: test_decode_uint32 ...passed 00:07:14.331 Test: test_decode_uint64 ...passed 00:07:14.331 Test: test_decode_string ...passed 00:07:14.331 Test: test_decode_uuid ...passed 00:07:14.331 Test: test_find ...passed 00:07:14.331 Test: test_find_array ...passed 00:07:14.331 Test: test_iterating ...passed 00:07:14.331 Test: test_free_object ...passed 00:07:14.331 00:07:14.331 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.331 suites 1 1 n/a 0 0 00:07:14.331 tests 17 17 17 0 0 00:07:14.331 asserts 236 236 236 0 n/a 00:07:14.331 00:07:14.331 Elapsed time = 0.001 seconds 00:07:14.331 
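The json_parse, json_util, and json_write suites above drive SPDK's in-place JSON parser; a minimal sketch of the usual two-pass spdk_json_parse() call, assuming the entry point from include/spdk/json.h and the convention that a NULL values array makes the first pass return the required value count (the input string is invented):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "spdk/json.h"

    int main(void)
    {
        /* Illustrative input; DECODE_IN_PLACE mutates the buffer, so keep it writable. */
        char buf[] = "{\"name\": \"value\", \"count\": 3}";
        struct spdk_json_val *values;
        ssize_t rc;

        /* First pass (values == NULL) sizes the value array. */
        rc = spdk_json_parse(buf, strlen(buf), NULL, 0, NULL, 0);
        if (rc < 0) {
            fprintf(stderr, "parse failed: %zd\n", rc);
            return 1;
        }

        values = calloc(rc, sizeof(*values));
        if (values == NULL) {
            return 1;
        }

        /* Second pass fills in the token array. */
        rc = spdk_json_parse(buf, strlen(buf), values, rc, NULL,
                             SPDK_JSON_PARSE_FLAG_DECODE_IN_PLACE);
        printf("parsed %zd JSON values\n", rc);
        free(values);
        return rc < 0 ? 1 : 0;
    }
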
14:09:06 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:14.331 00:07:14.331 00:07:14.331 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.331 http://cunit.sourceforge.net/ 00:07:14.331 00:07:14.331 00:07:14.331 Suite: json 00:07:14.331 Test: test_write_literal ...passed 00:07:14.331 Test: test_write_string_simple ...passed 00:07:14.331 Test: test_write_string_escapes ...passed 00:07:14.331 Test: test_write_string_utf16le ...passed 00:07:14.331 Test: test_write_number_int32 ...passed 00:07:14.331 Test: test_write_number_uint32 ...passed 00:07:14.331 Test: test_write_number_uint128 ...passed 00:07:14.331 Test: test_write_string_number_uint128 ...passed 00:07:14.331 Test: test_write_number_int64 ...passed 00:07:14.331 Test: test_write_number_uint64 ...passed 00:07:14.331 Test: test_write_number_double ...passed 00:07:14.591 Test: test_write_uuid ...passed 00:07:14.591 Test: test_write_array ...passed 00:07:14.591 Test: test_write_object ...passed 00:07:14.591 Test: test_write_nesting ...passed 00:07:14.591 Test: test_write_val ...passed 00:07:14.591 00:07:14.591 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.591 suites 1 1 n/a 0 0 00:07:14.591 tests 16 16 16 0 0 00:07:14.591 asserts 918 918 918 0 n/a 00:07:14.591 00:07:14.591 Elapsed time = 0.005 seconds 00:07:14.591 14:09:06 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:14.591 00:07:14.591 00:07:14.591 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.591 http://cunit.sourceforge.net/ 00:07:14.591 00:07:14.591 00:07:14.591 Suite: jsonrpc 00:07:14.591 Test: test_parse_request ...passed 00:07:14.591 Test: test_parse_request_streaming ...passed 00:07:14.591 00:07:14.591 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.591 suites 1 1 n/a 0 0 00:07:14.591 tests 2 2 2 0 0 00:07:14.591 asserts 289 289 289 0 n/a 00:07:14.591 00:07:14.591 Elapsed time = 0.003 seconds 00:07:14.591 00:07:14.591 real 0m0.133s 00:07:14.591 user 0m0.080s 00:07:14.591 sys 0m0.046s 00:07:14.591 14:09:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.591 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 END TEST unittest_json 00:07:14.591 ************************************ 00:07:14.591 14:09:06 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:07:14.591 14:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.591 14:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.591 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 START TEST unittest_rpc 00:07:14.591 ************************************ 00:07:14.591 14:09:06 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:07:14.591 14:09:06 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:14.591 00:07:14.591 00:07:14.591 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.591 http://cunit.sourceforge.net/ 00:07:14.591 00:07:14.591 00:07:14.591 Suite: rpc 00:07:14.591 Test: test_jsonrpc_handler ...passed 00:07:14.591 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:14.591 Test: test_rpc_get_methods ...[2024-11-18 14:09:06.522457] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:14.591 passed 00:07:14.591 Test: 
test_rpc_spdk_get_version ...passed 00:07:14.591 Test: test_spdk_rpc_listen_close ...passed 00:07:14.591 00:07:14.591 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.591 suites 1 1 n/a 0 0 00:07:14.591 tests 5 5 5 0 0 00:07:14.591 asserts 20 20 20 0 n/a 00:07:14.591 00:07:14.591 Elapsed time = 0.001 seconds 00:07:14.591 00:07:14.591 real 0m0.035s 00:07:14.591 user 0m0.020s 00:07:14.591 sys 0m0.014s 00:07:14.591 14:09:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.591 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 END TEST unittest_rpc 00:07:14.591 ************************************ 00:07:14.591 14:09:06 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:14.591 14:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.591 14:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.591 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 START TEST unittest_notify 00:07:14.591 ************************************ 00:07:14.591 14:09:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:14.591 00:07:14.591 00:07:14.591 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.591 http://cunit.sourceforge.net/ 00:07:14.591 00:07:14.591 00:07:14.591 Suite: app_suite 00:07:14.591 Test: notify ...passed 00:07:14.591 00:07:14.591 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.591 suites 1 1 n/a 0 0 00:07:14.591 tests 1 1 1 0 0 00:07:14.591 asserts 13 13 13 0 n/a 00:07:14.591 00:07:14.591 Elapsed time = 0.000 seconds 00:07:14.591 00:07:14.591 real 0m0.031s 00:07:14.591 user 0m0.021s 00:07:14.591 sys 0m0.009s 00:07:14.591 14:09:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.591 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.591 ************************************ 00:07:14.591 END TEST unittest_notify 00:07:14.591 ************************************ 00:07:14.851 14:09:06 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:07:14.851 14:09:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:14.851 14:09:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.851 14:09:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.851 ************************************ 00:07:14.851 START TEST unittest_nvme 00:07:14.851 ************************************ 00:07:14.851 14:09:06 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:07:14.851 14:09:06 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:14.851 00:07:14.851 00:07:14.851 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.851 http://cunit.sourceforge.net/ 00:07:14.851 00:07:14.851 00:07:14.851 Suite: nvme 00:07:14.851 Test: test_opc_data_transfer ...passed 00:07:14.851 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:14.851 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:14.851 Test: test_trid_parse_and_compare ...[2024-11-18 14:09:06.701321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:14.851 [2024-11-18 14:09:06.701749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:14.851 [2024-11-18 
14:09:06.701990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:14.851 [2024-11-18 14:09:06.702155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:14.851 [2024-11-18 14:09:06.702241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:14.851 [2024-11-18 14:09:06.702380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:14.851 passed 00:07:14.851 Test: test_trid_trtype_str ...passed 00:07:14.851 Test: test_trid_adrfam_str ...passed 00:07:14.851 Test: test_nvme_ctrlr_probe ...[2024-11-18 14:09:06.703278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:14.851 passed 00:07:14.851 Test: test_spdk_nvme_probe ...[2024-11-18 14:09:06.703760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:14.851 [2024-11-18 14:09:06.703932] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:14.851 [2024-11-18 14:09:06.704174] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:14.851 [2024-11-18 14:09:06.704370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:14.851 passed 00:07:14.851 Test: test_spdk_nvme_connect ...[2024-11-18 14:09:06.704750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:14.851 [2024-11-18 14:09:06.705255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:14.851 [2024-11-18 14:09:06.705474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:14.851 passed 00:07:14.851 Test: test_nvme_ctrlr_probe_internal ...[2024-11-18 14:09:06.705897] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:14.851 [2024-11-18 14:09:06.706085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:14.851 passed 00:07:14.851 Test: test_nvme_init_controllers ...[2024-11-18 14:09:06.706471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:14.851 passed 00:07:14.851 Test: test_nvme_driver_init ...[2024-11-18 14:09:06.706923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:14.851 [2024-11-18 14:09:06.707088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:14.851 [2024-11-18 14:09:06.822274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:14.851 [2024-11-18 14:09:06.822679] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:14.851 passed 00:07:14.851 Test: test_spdk_nvme_detach ...passed 00:07:14.851 Test: test_nvme_completion_poll_cb ...passed 00:07:14.851 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:14.851 Test: 
test_nvme_allocate_request_null ...passed 00:07:14.851 Test: test_nvme_allocate_request ...passed 00:07:14.851 Test: test_nvme_free_request ...passed 00:07:14.851 Test: test_nvme_allocate_request_user_copy ...passed 00:07:14.851 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:14.851 Test: test_nvme_request_check_timeout ...passed 00:07:14.851 Test: test_nvme_wait_for_completion ...passed 00:07:14.851 Test: test_spdk_nvme_parse_func ...passed 00:07:14.851 Test: test_spdk_nvme_detach_async ...passed 00:07:14.851 Test: test_nvme_parse_addr ...[2024-11-18 14:09:06.827298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:14.851 passed 00:07:14.851 00:07:14.852 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.852 suites 1 1 n/a 0 0 00:07:14.852 tests 25 25 25 0 0 00:07:14.852 asserts 326 326 326 0 n/a 00:07:14.852 00:07:14.852 Elapsed time = 0.008 seconds 00:07:14.852 14:09:06 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:14.852 00:07:14.852 00:07:14.852 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.852 http://cunit.sourceforge.net/ 00:07:14.852 00:07:14.852 00:07:14.852 Suite: nvme_ctrlr 00:07:14.852 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-18 14:09:06.857587] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 passed 00:07:14.852 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-18 14:09:06.859848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 passed 00:07:14.852 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-18 14:09:06.861631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 passed 00:07:14.852 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-18 14:09:06.863278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 passed 00:07:14.852 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-18 14:09:06.864944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 [2024-11-18 14:09:06.866289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 14:09:06.867696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 14:09:06.869109] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:14.852 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-18 14:09:06.872130] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 [2024-11-18 14:09:06.874713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 14:09:06.876173] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:14.852 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-18 14:09:06.879278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 [2024-11-18 14:09:06.880705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-11-18 14:09:06.883419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:14.852 Test: test_nvme_ctrlr_init_delay ...[2024-11-18 14:09:06.886521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 passed 00:07:14.852 Test: test_alloc_io_qpair_rr_1 ...[2024-11-18 14:09:06.888406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 [2024-11-18 14:09:06.888733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:14.852 [2024-11-18 14:09:06.889204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:14.852 [2024-11-18 14:09:06.889494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:14.852 [2024-11-18 14:09:06.889729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:14.852 passed 00:07:14.852 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:14.852 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:14.852 Test: test_alloc_io_qpair_wrr_1 ...[2024-11-18 14:09:06.890787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 passed 00:07:14.852 Test: test_alloc_io_qpair_wrr_2 ...[2024-11-18 14:09:06.891352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.852 [2024-11-18 14:09:06.891664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:14.852 passed 00:07:14.852 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-18 14:09:06.892231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:14.852 [2024-11-18 14:09:06.892507] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:14.852 [2024-11-18 14:09:06.892718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:07:14.852 [2024-11-18 14:09:06.892936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:14.852 passed 00:07:14.852 Test: test_nvme_ctrlr_fail ...[2024-11-18 14:09:06.893301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:14.852 passed 00:07:14.852 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:14.852 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:14.852 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:14.852 Test: test_nvme_ctrlr_test_active_ns ...[2024-11-18 14:09:06.894317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:15.420 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:15.420 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:15.420 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-18 14:09:07.217605] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-18 14:09:07.225212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-18 14:09:07.226786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 [2024-11-18 14:09:07.227000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:15.420 passed 00:07:15.420 Test: test_alloc_io_qpair_fail ...[2024-11-18 14:09:07.228573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 [2024-11-18 14:09:07.228853] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:15.420 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:15.420 Test: test_nvme_ctrlr_set_state ...[2024-11-18 14:09:07.229907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-18 14:09:07.230355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-18 14:09:07.253843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-18 14:09:07.296357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_reset ...[2024-11-18 14:09:07.298348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_aer_callback ...[2024-11-18 14:09:07.299008] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-18 14:09:07.301034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:15.420 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:15.420 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-18 14:09:07.303612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:15.420 Test: test_nvme_ctrlr_ana_resize ...[2024-11-18 14:09:07.305534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:15.420 Test: test_nvme_transport_ctrlr_ready ...[2024-11-18 14:09:07.307597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:15.420 [2024-11-18 14:09:07.307767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:15.420 passed 00:07:15.420 Test: test_nvme_ctrlr_disable ...[2024-11-18 14:09:07.308093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:15.420 passed 00:07:15.420 00:07:15.420 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.420 suites 1 1 n/a 0 0 00:07:15.420 tests 43 43 43 0 0 00:07:15.420 asserts 10418 10418 10418 0 n/a 00:07:15.420 00:07:15.420 Elapsed time = 0.397 seconds 00:07:15.420 14:09:07 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:15.420 00:07:15.420 
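The test_trid_parse_and_compare cases in the nvme suite above feed malformed key/value strings into the transport-ID parser to provoke the "Failed to parse transport ID" errors; for contrast, a minimal sketch of a well-formed call to the same public API (the PCIe address is invented):

    #include <stdio.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_nvme_transport_id trid = {0};
        /* The parser expects space-separated "key:value" pairs; the unit test
         * exercises variants that break exactly this format. */
        const char *str = "trtype:PCIe traddr:0000:04:00.0";
        int rc;

        rc = spdk_nvme_transport_id_parse(&trid, str);
        if (rc != 0) {
            fprintf(stderr, "parse failed: %d\n", rc);
            return 1;
        }
        printf("trtype=%d traddr=%s\n", trid.trtype, trid.traddr);
        return 0;
    }
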
00:07:15.420 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.420 http://cunit.sourceforge.net/ 00:07:15.420 00:07:15.420 00:07:15.420 Suite: nvme_ctrlr_cmd 00:07:15.420 Test: test_get_log_pages ...passed 00:07:15.420 Test: test_set_feature_cmd ...passed 00:07:15.421 Test: test_set_feature_ns_cmd ...passed 00:07:15.421 Test: test_get_feature_cmd ...passed 00:07:15.421 Test: test_get_feature_ns_cmd ...passed 00:07:15.421 Test: test_abort_cmd ...passed 00:07:15.421 Test: test_set_host_id_cmds ...[2024-11-18 14:09:07.350146] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:15.421 passed 00:07:15.421 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:15.421 Test: test_io_raw_cmd ...passed 00:07:15.421 Test: test_io_raw_cmd_with_md ...passed 00:07:15.421 Test: test_namespace_attach ...passed 00:07:15.421 Test: test_namespace_detach ...passed 00:07:15.421 Test: test_namespace_create ...passed 00:07:15.421 Test: test_namespace_delete ...passed 00:07:15.421 Test: test_doorbell_buffer_config ...passed 00:07:15.421 Test: test_format_nvme ...passed 00:07:15.421 Test: test_fw_commit ...passed 00:07:15.421 Test: test_fw_image_download ...passed 00:07:15.421 Test: test_sanitize ...passed 00:07:15.421 Test: test_directive ...passed 00:07:15.421 Test: test_nvme_request_add_abort ...passed 00:07:15.421 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:15.421 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:15.421 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:15.421 00:07:15.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.421 suites 1 1 n/a 0 0 00:07:15.421 tests 24 24 24 0 0 00:07:15.421 asserts 198 198 198 0 n/a 00:07:15.421 00:07:15.421 Elapsed time = 0.001 seconds 00:07:15.421 14:09:07 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:15.421 00:07:15.421 00:07:15.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.421 http://cunit.sourceforge.net/ 00:07:15.421 00:07:15.421 00:07:15.421 Suite: nvme_ctrlr_cmd 00:07:15.421 Test: test_geometry_cmd ...passed 00:07:15.421 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:15.421 00:07:15.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.421 suites 1 1 n/a 0 0 00:07:15.421 tests 2 2 2 0 0 00:07:15.421 asserts 7 7 7 0 n/a 00:07:15.421 00:07:15.421 Elapsed time = 0.000 seconds 00:07:15.421 14:09:07 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:15.421 00:07:15.421 00:07:15.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.421 http://cunit.sourceforge.net/ 00:07:15.421 00:07:15.421 00:07:15.421 Suite: nvme 00:07:15.421 Test: test_nvme_ns_construct ...passed 00:07:15.421 Test: test_nvme_ns_uuid ...passed 00:07:15.421 Test: test_nvme_ns_csi ...passed 00:07:15.421 Test: test_nvme_ns_data ...passed 00:07:15.421 Test: test_nvme_ns_set_identify_data ...passed 00:07:15.421 Test: test_spdk_nvme_ns_get_values ...passed 00:07:15.421 Test: test_spdk_nvme_ns_is_active ...passed 00:07:15.421 Test: spdk_nvme_ns_supports ...passed 00:07:15.421 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:15.421 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:15.421 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:15.421 Test: test_nvme_ns_find_id_desc ...passed 00:07:15.421 00:07:15.421 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:15.421 suites 1 1 n/a 0 0 00:07:15.421 tests 12 12 12 0 0 00:07:15.421 asserts 83 83 83 0 n/a 00:07:15.421 00:07:15.421 Elapsed time = 0.001 seconds 00:07:15.421 14:09:07 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:15.421 00:07:15.421 00:07:15.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.421 http://cunit.sourceforge.net/ 00:07:15.421 00:07:15.421 00:07:15.421 Suite: nvme_ns_cmd 00:07:15.421 Test: split_test ...passed 00:07:15.421 Test: split_test2 ...passed 00:07:15.421 Test: split_test3 ...passed 00:07:15.421 Test: split_test4 ...passed 00:07:15.421 Test: test_nvme_ns_cmd_flush ...passed 00:07:15.421 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:15.421 Test: test_nvme_ns_cmd_copy ...passed 00:07:15.421 Test: test_io_flags ...[2024-11-18 14:09:07.449031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:15.421 passed 00:07:15.421 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:15.421 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:15.421 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:15.421 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:15.421 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:15.421 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:15.421 Test: test_cmd_child_request ...passed 00:07:15.421 Test: test_nvme_ns_cmd_readv ...passed 00:07:15.421 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:15.421 Test: test_nvme_ns_cmd_writev ...[2024-11-18 14:09:07.451894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:15.421 passed 00:07:15.421 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:15.421 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:15.421 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:15.421 Test: test_nvme_ns_cmd_comparev ...passed 00:07:15.421 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:15.421 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:15.421 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:15.421 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:15.421 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:15.421 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-11-18 14:09:07.455336] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:15.421 passed 00:07:15.421 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-11-18 14:09:07.455729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:15.421 passed 00:07:15.421 Test: test_nvme_ns_cmd_verify ...passed 00:07:15.421 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:15.421 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:15.421 00:07:15.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.421 suites 1 1 n/a 0 0 00:07:15.421 tests 32 32 32 0 0 00:07:15.421 asserts 550 550 550 0 n/a 00:07:15.421 00:07:15.421 Elapsed time = 0.005 seconds 00:07:15.421 14:09:07 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:15.421 00:07:15.421 00:07:15.421 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.421 http://cunit.sourceforge.net/ 00:07:15.421 00:07:15.421 00:07:15.421 Suite: 
nvme_ns_cmd 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:15.421 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:15.421 00:07:15.421 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.421 suites 1 1 n/a 0 0 00:07:15.421 tests 12 12 12 0 0 00:07:15.421 asserts 123 123 123 0 n/a 00:07:15.421 00:07:15.421 Elapsed time = 0.001 seconds 00:07:15.680 14:09:07 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:15.680 00:07:15.680 00:07:15.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.680 http://cunit.sourceforge.net/ 00:07:15.680 00:07:15.680 00:07:15.680 Suite: nvme_qpair 00:07:15.680 Test: test3 ...passed 00:07:15.680 Test: test_ctrlr_failed ...passed 00:07:15.680 Test: struct_packing ...passed 00:07:15.680 Test: test_nvme_qpair_process_completions ...[2024-11-18 14:09:07.518664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:15.680 [2024-11-18 14:09:07.519087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:15.680 [2024-11-18 14:09:07.519305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:15.680 [2024-11-18 14:09:07.519559] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:15.680 passed 00:07:15.680 Test: test_nvme_completion_is_retry ...passed 00:07:15.680 Test: test_get_status_string ...passed 00:07:15.680 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:15.680 Test: test_nvme_qpair_submit_request ...passed 00:07:15.680 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:15.680 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:15.680 Test: test_nvme_qpair_init_deinit ...[2024-11-18 14:09:07.521366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:15.680 passed 00:07:15.680 Test: test_nvme_get_sgl_print_info ...passed 00:07:15.680 00:07:15.680 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.680 suites 1 1 n/a 0 0 00:07:15.680 tests 12 12 12 0 0 00:07:15.680 asserts 154 154 154 0 n/a 00:07:15.680 00:07:15.680 Elapsed time = 0.002 seconds 00:07:15.680 14:09:07 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:15.680 00:07:15.680 00:07:15.680 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.680 http://cunit.sourceforge.net/ 00:07:15.680 
00:07:15.680 00:07:15.680 Suite: nvme_pcie 00:07:15.680 Test: test_prp_list_append ...[2024-11-18 14:09:07.554328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:15.680 [2024-11-18 14:09:07.554775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:15.680 [2024-11-18 14:09:07.554970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:15.681 [2024-11-18 14:09:07.555481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:15.681 [2024-11-18 14:09:07.555733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:15.681 passed 00:07:15.681 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:15.681 Test: test_shadow_doorbell_update ...passed 00:07:15.681 Test: test_build_contig_hw_sgl_request ...passed 00:07:15.681 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:15.681 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:15.681 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:15.681 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-11-18 14:09:07.556825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:15.681 passed 00:07:15.681 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:15.681 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:15.681 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-18 14:09:07.557632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:07:15.681 passed 00:07:15.681 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-11-18 14:09:07.558082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:15.681 passed 00:07:15.681 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-11-18 14:09:07.558474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:15.681 passed 00:07:15.681 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-11-18 14:09:07.558820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:15.681 passed 00:07:15.681 00:07:15.681 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.681 suites 1 1 n/a 0 0 00:07:15.681 tests 14 14 14 0 0 00:07:15.681 asserts 235 235 235 0 n/a 00:07:15.681 00:07:15.681 Elapsed time = 0.002 seconds 00:07:15.681 14:09:07 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:15.681 00:07:15.681 00:07:15.681 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.681 http://cunit.sourceforge.net/ 00:07:15.681 00:07:15.681 00:07:15.681 Suite: nvme_ns_cmd 00:07:15.681 Test: nvme_poll_group_create_test ...passed 00:07:15.681 Test: nvme_poll_group_add_remove_test ...passed 00:07:15.681 Test: nvme_poll_group_process_completions ...passed 00:07:15.681 Test: nvme_poll_group_destroy_test ...passed 00:07:15.681 Test: nvme_poll_group_get_free_stats ...passed 00:07:15.681 00:07:15.681 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.681 suites 1 1 n/a 0 0 00:07:15.681 tests 5 5 5 0 0 00:07:15.681 asserts 75 75 75 0 n/a 00:07:15.681 00:07:15.681 Elapsed time = 0.001 seconds 00:07:15.681 14:09:07 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:15.681 00:07:15.681 00:07:15.681 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.681 http://cunit.sourceforge.net/ 00:07:15.681 00:07:15.681 00:07:15.681 Suite: nvme_quirks 00:07:15.681 Test: test_nvme_quirks_striping ...passed 00:07:15.681 00:07:15.681 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.681 suites 1 1 n/a 0 0 00:07:15.681 tests 1 1 1 0 0 00:07:15.681 asserts 5 5 5 0 n/a 00:07:15.681 00:07:15.681 Elapsed time = 0.000 seconds 00:07:15.681 14:09:07 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:15.681 00:07:15.681 00:07:15.681 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.681 http://cunit.sourceforge.net/ 00:07:15.681 00:07:15.681 00:07:15.681 Suite: nvme_tcp 00:07:15.681 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:15.681 Test: test_nvme_tcp_build_iovs ...passed 00:07:15.681 Test: test_nvme_tcp_build_sgl_request ...[2024-11-18 14:09:07.651753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fffe9c05100, and the iovcnt=16, remaining_size=28672 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:15.681 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:15.681 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:15.681 Test: test_nvme_tcp_req_get ...passed 00:07:15.681 Test: test_nvme_tcp_req_init ...passed 00:07:15.681 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:15.681 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:15.681 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-11-18 14:09:07.653652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06e20 is same with the state(6) to be set 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_alloc_reqs ...passed 00:07:15.681 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-11-18 14:09:07.654384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c05fb0 is same with the state(5) to be set 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-18 14:09:07.654758] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fffe9c06ae0 00:07:15.681 [2024-11-18 14:09:07.654929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:15.681 [2024-11-18 14:09:07.655136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.655342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:15.681 [2024-11-18 14:09:07.655551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.655727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:15.681 [2024-11-18 14:09:07.655915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.656083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.656262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.656446] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.656611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 passed[2024-11-18 14:09:07.656718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c06470 is same with the state(5) to be set 00:07:15.681 00:07:15.681 Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-18 14:09:07.657060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:15.681 [2024-11-18 14:09:07.657284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:15.681 [2024-11-18 14:09:07.657676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:15.681 Test: test_nvme_tcp_c2h_payload_handle ...[2024-11-18 14:09:07.658231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fffe9c06620): PDU Sequence Error 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_icresp_handle ...[2024-11-18 14:09:07.658680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:15.681 [2024-11-18 14:09:07.658850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:15.681 [2024-11-18 14:09:07.659026] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c05fc0 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.659222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:15.681 [2024-11-18 14:09:07.659387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c05fc0 is same with the state(5) to be set 00:07:15.681 [2024-11-18 14:09:07.659573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c05fc0 is same with the state(0) to be set 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-18 14:09:07.659962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fffe9c06ae0): PDU Sequence Error 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-18 14:09:07.660363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fffe9c052a0 00:07:15.681 passed 00:07:15.681 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:15.682 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-18 14:09:07.661006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fffe9c04920, errno=0, rc=0 00:07:15.682 [2024-11-18 14:09:07.661218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c04920 is same with the state(5) to be set 00:07:15.682 [2024-11-18 14:09:07.661402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fffe9c04920 is same with the state(5) to be set 00:07:15.682 [2024-11-18 14:09:07.661582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fffe9c04920 (0): Success 00:07:15.682 [2024-11-18 14:09:07.661752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fffe9c04920 (0): Success 00:07:15.682 passed 00:07:15.940 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-18 14:09:07.782979] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:07:15.940 [2024-11-18 14:09:07.783430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:15.940 passed 00:07:15.940 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:15.940 Test: test_nvme_tcp_poll_group_get_stats ...[2024-11-18 14:09:07.784094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:15.940 [2024-11-18 14:09:07.784285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:15.940 passed 00:07:15.940 Test: test_nvme_tcp_ctrlr_construct ...[2024-11-18 14:09:07.784824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:15.940 [2024-11-18 14:09:07.785022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:15.940 [2024-11-18 14:09:07.785242] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:15.940 [2024-11-18 14:09:07.785448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:15.940 [2024-11-18 14:09:07.785706] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:07:15.940 [2024-11-18 14:09:07.785914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:15.940 passed 00:07:15.940 Test: test_nvme_tcp_qpair_submit_request ...[2024-11-18 14:09:07.786363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:07:15.940 [2024-11-18 14:09:07.786530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:15.940 passed 00:07:15.940 00:07:15.940 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.940 suites 1 1 n/a 0 0 00:07:15.940 tests 27 27 27 0 0 00:07:15.940 asserts 624 624 624 0 n/a 00:07:15.940 00:07:15.940 Elapsed time = 0.128 seconds 00:07:15.940 14:09:07 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:15.940 00:07:15.940 00:07:15.940 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.940 http://cunit.sourceforge.net/ 00:07:15.940 00:07:15.940 00:07:15.940 Suite: nvme_transport 00:07:15.940 Test: test_nvme_get_transport ...passed 00:07:15.940 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:15.940 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:15.940 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:15.940 Test: test_ctrlr_get_memory_domains ...passed 00:07:15.940 00:07:15.940 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.940 suites 1 1 n/a 0 0 00:07:15.940 tests 5 5 5 0 0 00:07:15.940 asserts 28 28 28 0 n/a 00:07:15.940 00:07:15.940 Elapsed time = 0.000 seconds 00:07:15.940 14:09:07 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:15.940 00:07:15.940 00:07:15.940 CUnit - A unit testing framework for 
C - Version 2.1-3 00:07:15.940 http://cunit.sourceforge.net/ 00:07:15.940 00:07:15.940 00:07:15.940 Suite: nvme_io_msg 00:07:15.940 Test: test_nvme_io_msg_send ...passed 00:07:15.940 Test: test_nvme_io_msg_process ...passed 00:07:15.940 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:15.940 00:07:15.941 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.941 suites 1 1 n/a 0 0 00:07:15.941 tests 3 3 3 0 0 00:07:15.941 asserts 56 56 56 0 n/a 00:07:15.941 00:07:15.941 Elapsed time = 0.000 seconds 00:07:15.941 14:09:07 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:15.941 00:07:15.941 00:07:15.941 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.941 http://cunit.sourceforge.net/ 00:07:15.941 00:07:15.941 00:07:15.941 Suite: nvme_pcie_common 00:07:15.941 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-11-18 14:09:07.895700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:15.941 passed 00:07:15.941 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:15.941 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:15.941 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-18 14:09:07.897028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:15.941 [2024-11-18 14:09:07.897299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:15.941 [2024-11-18 14:09:07.897470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:15.941 passed 00:07:15.941 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:15.941 Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-18 14:09:07.898271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:15.941 [2024-11-18 14:09:07.898452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:15.941 passed 00:07:15.941 00:07:15.941 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.941 suites 1 1 n/a 0 0 00:07:15.941 tests 6 6 6 0 0 00:07:15.941 asserts 148 148 148 0 n/a 00:07:15.941 00:07:15.941 Elapsed time = 0.002 seconds 00:07:15.941 14:09:07 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:15.941 00:07:15.941 00:07:15.941 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.941 http://cunit.sourceforge.net/ 00:07:15.941 00:07:15.941 00:07:15.941 Suite: nvme_fabric 00:07:15.941 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:15.941 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:15.941 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:15.941 Test: test_nvme_fabric_discover_probe ...passed 00:07:15.941 Test: test_nvme_fabric_qpair_connect ...[2024-11-18 14:09:07.926883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:15.941 passed 00:07:15.941 00:07:15.941 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.941 suites 1 
1 n/a 0 0 00:07:15.941 tests 5 5 5 0 0 00:07:15.941 asserts 60 60 60 0 n/a 00:07:15.941 00:07:15.941 Elapsed time = 0.001 seconds 00:07:15.941 14:09:07 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:15.941 00:07:15.941 00:07:15.941 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.941 http://cunit.sourceforge.net/ 00:07:15.941 00:07:15.941 00:07:15.941 Suite: nvme_opal 00:07:15.941 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:15.941 Test: test_opal_add_short_atom_header ...[2024-11-18 14:09:07.956923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:15.941 passed 00:07:15.941 00:07:15.941 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.941 suites 1 1 n/a 0 0 00:07:15.941 tests 2 2 2 0 0 00:07:15.941 asserts 22 22 22 0 n/a 00:07:15.941 00:07:15.941 Elapsed time = 0.000 seconds 00:07:15.941 00:07:15.941 real 0m1.286s 00:07:15.941 user 0m0.655s 00:07:15.941 sys 0m0.424s 00:07:15.941 14:09:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.941 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:07:15.941 ************************************ 00:07:15.941 END TEST unittest_nvme 00:07:15.941 ************************************ 00:07:16.199 14:09:08 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:16.199 14:09:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.199 14:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.199 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:07:16.199 ************************************ 00:07:16.199 START TEST unittest_log 00:07:16.199 ************************************ 00:07:16.199 14:09:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:16.199 00:07:16.199 00:07:16.199 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.199 http://cunit.sourceforge.net/ 00:07:16.199 00:07:16.199 00:07:16.199 Suite: log 00:07:16.199 Test: log_test ...[2024-11-18 14:09:08.051351] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:16.199 [2024-11-18 14:09:08.051791] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:16.199 log dump test: 00:07:16.199 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:16.199 spdk dump test: 00:07:16.199 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:16.199 spdk dump test: 00:07:16.199 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:16.199 00000010 65 20 63 68 61 72 73 e chars 00:07:16.199 passed 00:07:17.135 Test: deprecation ...passed 00:07:17.135 00:07:17.135 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.135 suites 1 1 n/a 0 0 00:07:17.135 tests 2 2 2 0 0 00:07:17.136 asserts 73 73 73 0 n/a 00:07:17.136 00:07:17.136 Elapsed time = 0.001 seconds 00:07:17.136 00:07:17.136 real 0m1.035s 00:07:17.136 user 0m0.021s 00:07:17.136 sys 0m0.013s 00:07:17.136 ************************************ 00:07:17.136 END TEST unittest_log 00:07:17.136 ************************************ 00:07:17.136 14:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.136 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 14:09:09 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:17.136 14:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:07:17.136 14:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.136 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 ************************************ 00:07:17.136 START TEST unittest_lvol 00:07:17.136 ************************************ 00:07:17.136 14:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:17.136 00:07:17.136 00:07:17.136 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.136 http://cunit.sourceforge.net/ 00:07:17.136 00:07:17.136 00:07:17.136 Suite: lvol 00:07:17.136 Test: lvs_init_unload_success ...[2024-11-18 14:09:09.152552] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:17.136 passed 00:07:17.136 Test: lvs_init_destroy_success ...[2024-11-18 14:09:09.153490] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:17.136 passed 00:07:17.136 Test: lvs_init_opts_success ...passed 00:07:17.136 Test: lvs_unload_lvs_is_null_fail ...[2024-11-18 14:09:09.154237] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:17.136 passed 00:07:17.136 Test: lvs_names ...[2024-11-18 14:09:09.154609] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:17.136 [2024-11-18 14:09:09.154811] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:17.136 [2024-11-18 14:09:09.155117] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:17.136 passed 00:07:17.136 Test: lvol_create_destroy_success ...passed 00:07:17.136 Test: lvol_create_fail ...[2024-11-18 14:09:09.156464] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:17.136 [2024-11-18 14:09:09.156697] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:17.136 passed 00:07:17.136 Test: lvol_destroy_fail ...[2024-11-18 14:09:09.157376] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:17.136 passed 00:07:17.136 Test: lvol_close ...[2024-11-18 14:09:09.157934] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:17.136 [2024-11-18 14:09:09.158119] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:17.136 passed 00:07:17.136 Test: lvol_resize ...passed 00:07:17.136 Test: lvol_set_read_only ...passed 00:07:17.136 Test: test_lvs_load ...[2024-11-18 14:09:09.159623] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:17.136 [2024-11-18 14:09:09.159807] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:17.136 passed 00:07:17.136 Test: lvols_load ...[2024-11-18 14:09:09.160341] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:17.136 [2024-11-18 14:09:09.160581] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:17.136 passed 00:07:17.136 Test: lvol_open ...passed 00:07:17.136 Test: lvol_snapshot ...passed 00:07:17.136 Test: lvol_snapshot_fail ...[2024-11-18 
14:09:09.161981] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:17.136 passed 00:07:17.136 Test: lvol_clone ...passed 00:07:17.136 Test: lvol_clone_fail ...[2024-11-18 14:09:09.163053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:17.136 passed 00:07:17.136 Test: lvol_iter_clones ...passed 00:07:17.136 Test: lvol_refcnt ...[2024-11-18 14:09:09.164141] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 2c5c3c69-7900-45d8-8f51-fbab221e1d4d because it is still open 00:07:17.136 passed 00:07:17.136 Test: lvol_names ...[2024-11-18 14:09:09.164702] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:17.136 [2024-11-18 14:09:09.164941] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:17.136 [2024-11-18 14:09:09.165319] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:17.136 passed 00:07:17.136 Test: lvol_create_thin_provisioned ...passed 00:07:17.136 Test: lvol_rename ...[2024-11-18 14:09:09.166256] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:17.136 [2024-11-18 14:09:09.166498] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:17.136 passed 00:07:17.136 Test: lvs_rename ...[2024-11-18 14:09:09.167007] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:17.136 passed 00:07:17.136 Test: lvol_inflate ...[2024-11-18 14:09:09.167605] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:17.136 passed 00:07:17.136 Test: lvol_decouple_parent ...[2024-11-18 14:09:09.168191] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:17.136 passed 00:07:17.136 Test: lvol_get_xattr ...passed 00:07:17.136 Test: lvol_esnap_reload ...passed 00:07:17.136 Test: lvol_esnap_create_bad_args ...[2024-11-18 14:09:09.169344] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:17.136 [2024-11-18 14:09:09.169501] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:07:17.136 [2024-11-18 14:09:09.169663] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:17.136 [2024-11-18 14:09:09.169927] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:17.136 [2024-11-18 14:09:09.170204] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:17.136 passed 00:07:17.136 Test: lvol_esnap_create_delete ...passed 00:07:17.136 Test: lvol_esnap_load_esnaps ...[2024-11-18 14:09:09.170998] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:17.136 passed 00:07:17.136 Test: lvol_esnap_missing ...[2024-11-18 14:09:09.171435] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:17.136 [2024-11-18 14:09:09.171601] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:17.136 passed 00:07:17.136 Test: lvol_esnap_hotplug ... 00:07:17.136 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:17.136 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:17.136 [2024-11-18 14:09:09.172857] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol b9c12fe5-4ad7-4a6e-98e7-d8648e36163d: failed to create esnap bs_dev: error -12 00:07:17.136 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:17.136 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:17.136 [2024-11-18 14:09:09.173432] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol c678182d-8e8f-479e-be68-6970c47e00b0: failed to create esnap bs_dev: error -12 00:07:17.136 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:17.136 [2024-11-18 14:09:09.173796] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 06cc8839-749b-4115-b379-5d825da68114: failed to create esnap bs_dev: error -12 00:07:17.136 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:17.136 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:17.136 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:17.136 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:17.136 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:17.136 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:17.137 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:17.137 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:17.137 passed 00:07:17.137 Test: lvol_get_by ...passed 00:07:17.137 00:07:17.137 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.137 suites 1 1 n/a 0 0 00:07:17.137 tests 34 34 34 0 0 00:07:17.137 asserts 1439 1439 1439 0 n/a 00:07:17.137 00:07:17.137 Elapsed time = 0.014 seconds 00:07:17.137 00:07:17.137 real 0m0.063s 00:07:17.137 user 0m0.028s 00:07:17.137 sys 0m0.025s 00:07:17.137 14:09:09 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.137 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.137 ************************************ 00:07:17.137 END TEST unittest_lvol 00:07:17.137 ************************************ 00:07:17.395 14:09:09 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.395 14:09:09 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:17.395 14:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.395 14:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.395 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 ************************************ 00:07:17.395 START TEST unittest_nvme_rdma 00:07:17.395 ************************************ 00:07:17.395 14:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:17.395 00:07:17.395 00:07:17.395 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.395 http://cunit.sourceforge.net/ 00:07:17.395 00:07:17.395 00:07:17.395 Suite: nvme_rdma 00:07:17.395 Test: test_nvme_rdma_build_sgl_request ...[2024-11-18 14:09:09.269532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:17.395 [2024-11-18 14:09:09.270071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:17.395 [2024-11-18 14:09:09.270349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:17.395 Test: test_nvme_rdma_build_contig_request ...[2024-11-18 14:09:09.270933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:17.395 Test: test_nvme_rdma_create_reqs ...[2024-11-18 14:09:09.271599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_create_rsps ...[2024-11-18 14:09:09.272233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-11-18 14:09:09.272754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:17.395 [2024-11-18 14:09:09.272974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_poller_create ...passed 00:07:17.395 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-11-18 14:09:09.273636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_ctrlr_construct ...passed 00:07:17.395 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:17.395 Test: test_nvme_rdma_req_init ...passed 00:07:17.395 Test: test_nvme_rdma_validate_cm_event ...[2024-11-18 14:09:09.274689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:17.395 [2024-11-18 14:09:09.274852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_qpair_init ...passed 00:07:17.395 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:17.395 Test: test_nvme_rdma_memory_domain ...[2024-11-18 14:09:09.275669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:17.395 passed 00:07:17.395 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:17.395 Test: test_rdma_get_memory_translation ...[2024-11-18 14:09:09.276085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:17.395 [2024-11-18 14:09:09.276391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:17.395 passed 00:07:17.395 Test: test_get_rdma_qpair_from_wc ...passed 00:07:17.395 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:17.395 Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-18 14:09:09.276991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:17.395 [2024-11-18 14:09:09.277242] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:17.395 passed 00:07:17.395 Test: test_nvme_rdma_qpair_set_poller ...[2024-11-18 14:09:09.277770] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:17.395 [2024-11-18 14:09:09.277965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:17.395 [2024-11-18 14:09:09.278129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe490405a0 on poll group 0x60b0000001a0 00:07:17.395 [2024-11-18 14:09:09.278317] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:17.395 [2024-11-18 14:09:09.278485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:17.395 [2024-11-18 14:09:09.278652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe490405a0 on poll group 0x60b0000001a0 00:07:17.395 [2024-11-18 14:09:09.278867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:17.395 passed 00:07:17.395 00:07:17.395 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.395 suites 1 1 n/a 0 0 00:07:17.395 tests 22 22 22 0 0 00:07:17.395 asserts 412 412 412 0 n/a 00:07:17.395 00:07:17.395 Elapsed time = 0.005 seconds 00:07:17.395 00:07:17.395 real 0m0.041s 00:07:17.395 user 0m0.010s 00:07:17.395 sys 0m0.025s 00:07:17.395 14:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.395 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 ************************************ 00:07:17.395 END TEST unittest_nvme_rdma 00:07:17.395 ************************************ 00:07:17.395 14:09:09 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:17.395 14:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.395 14:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.395 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.395 ************************************ 00:07:17.395 START TEST unittest_nvmf_transport 00:07:17.395 ************************************ 00:07:17.395 14:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:17.395 00:07:17.395 00:07:17.395 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.395 http://cunit.sourceforge.net/ 00:07:17.395 00:07:17.395 00:07:17.395 Suite: nvmf 00:07:17.395 Test: test_spdk_nvmf_transport_create ...[2024-11-18 14:09:09.367181] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:17.395 [2024-11-18 14:09:09.367683] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:17.395 [2024-11-18 14:09:09.367895] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:17.395 [2024-11-18 14:09:09.368158] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:17.395 passed 00:07:17.395 Test: test_nvmf_transport_poll_group_create ...passed 00:07:17.396 Test: test_spdk_nvmf_transport_opts_init ...[2024-11-18 14:09:09.368920] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:07:17.396 [2024-11-18 14:09:09.369148] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:17.396 [2024-11-18 14:09:09.369311] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:17.396 passed 00:07:17.396 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:17.396 00:07:17.396 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.396 suites 1 1 n/a 0 0 00:07:17.396 tests 4 4 4 0 0 00:07:17.396 asserts 49 49 49 0 n/a 00:07:17.396 00:07:17.396 Elapsed time = 0.002 seconds 00:07:17.396 00:07:17.396 real 0m0.042s 00:07:17.396 user 0m0.025s 00:07:17.396 sys 0m0.017s 00:07:17.396 14:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.396 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.396 ************************************ 00:07:17.396 END TEST unittest_nvmf_transport 00:07:17.396 ************************************ 00:07:17.396 14:09:09 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:17.396 14:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.396 14:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.396 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.396 ************************************ 00:07:17.396 START TEST unittest_rdma 00:07:17.396 ************************************ 00:07:17.396 14:09:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:17.396 00:07:17.396 00:07:17.396 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.396 http://cunit.sourceforge.net/ 00:07:17.396 00:07:17.396 00:07:17.396 Suite: rdma_common 00:07:17.396 Test: test_spdk_rdma_pd ...[2024-11-18 14:09:09.456611] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:17.396 passed 00:07:17.396 00:07:17.396 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.396 suites 1 1 n/a 0 0 00:07:17.396 tests 1 1 1 0 0 00:07:17.396 asserts 31 31 31 0 n/a 00:07:17.396 00:07:17.396 Elapsed time = 0.001 seconds 00:07:17.396 [2024-11-18 14:09:09.457086] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:17.654 00:07:17.654 real 0m0.031s 00:07:17.654 user 0m0.026s 00:07:17.654 sys 0m0.005s 00:07:17.654 14:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.654 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 ************************************ 00:07:17.654 END TEST unittest_rdma 00:07:17.654 ************************************ 00:07:17.654 14:09:09 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.654 14:09:09 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:17.654 14:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.654 14:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.654 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 ************************************ 00:07:17.654 START TEST unittest_nvme_cuse 00:07:17.654 ************************************ 00:07:17.654 14:09:09 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:17.654 00:07:17.654 00:07:17.654 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.654 http://cunit.sourceforge.net/ 00:07:17.654 00:07:17.654 00:07:17.654 Suite: nvme_cuse 00:07:17.654 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:17.654 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:17.654 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:17.654 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:17.654 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:17.654 Test: test_cuse_nvme_submit_io ...[2024-11-18 14:09:09.546494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:17.654 passed 00:07:17.654 Test: test_cuse_nvme_reset ...passed 00:07:17.654 Test: test_nvme_cuse_stop ...passed 00:07:17.654 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:17.654 00:07:17.654 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.654 suites 1 1 n/a 0 0 00:07:17.654 tests 9 9 9 0 0 00:07:17.654 asserts 121 121 121 0 n/a 00:07:17.654 00:07:17.654 Elapsed time = 0.002 seconds 00:07:17.654 [2024-11-18 14:09:09.546793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:17.654 00:07:17.654 real 0m0.033s 00:07:17.654 user 0m0.017s 00:07:17.654 sys 0m0.016s 00:07:17.654 14:09:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.654 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 ************************************ 00:07:17.654 END TEST unittest_nvme_cuse 00:07:17.654 ************************************ 00:07:17.654 14:09:09 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:07:17.654 14:09:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.654 14:09:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.654 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.654 ************************************ 00:07:17.654 START TEST unittest_nvmf 00:07:17.654 ************************************ 00:07:17.654 14:09:09 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:07:17.654 14:09:09 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:17.654 00:07:17.654 00:07:17.654 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.654 http://cunit.sourceforge.net/ 00:07:17.655 00:07:17.655 00:07:17.655 Suite: nvmf 00:07:17.655 Test: test_get_log_page ...[2024-11-18 14:09:09.636098] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:17.655 passed 00:07:17.655 Test: test_process_fabrics_cmd ...passed 00:07:17.655 Test: test_connect ...passed 00:07:17.655 Test: test_get_ns_id_desc_list ...passed 00:07:17.655 Test: test_identify_ns ...[2024-11-18 14:09:09.636918] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:17.655 [2024-11-18 14:09:09.637031] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:17.655 [2024-11-18 14:09:09.637094] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:17.655 [2024-11-18 14:09:09.637153] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: 
*ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:17.655 [2024-11-18 14:09:09.637248] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:17.655 [2024-11-18 14:09:09.637288] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:17.655 [2024-11-18 14:09:09.637400] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:17.655 [2024-11-18 14:09:09.637447] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:17.655 [2024-11-18 14:09:09.637569] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:17.655 [2024-11-18 14:09:09.637646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:17.655 [2024-11-18 14:09:09.637908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:17.655 [2024-11-18 14:09:09.637999] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:17.655 [2024-11-18 14:09:09.638095] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:17.655 [2024-11-18 14:09:09.638172] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:17.655 [2024-11-18 14:09:09.638264] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:17.655 [2024-11-18 14:09:09.638391] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:17.655 [2024-11-18 14:09:09.638602] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.655 [2024-11-18 14:09:09.638825] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:17.655 passed 00:07:17.655 Test: test_identify_ns_iocs_specific ...[2024-11-18 14:09:09.638973] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:17.655 [2024-11-18 14:09:09.639115] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.655 [2024-11-18 14:09:09.639492] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.655 passed 00:07:17.655 Test: test_reservation_write_exclusive ...passed 00:07:17.655 Test: test_reservation_exclusive_access ...passed 00:07:17.655 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:17.655 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:17.655 Test: test_reservation_notification_log_page ...passed 00:07:17.655 Test: test_get_dif_ctx ...passed 00:07:17.655 Test: test_set_get_features ...[2024-11-18 14:09:09.639990] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:17.655 passed 00:07:17.655 Test: test_identify_ctrlr ...passed 00:07:17.655 Test: test_identify_ctrlr_iocs_specific ...[2024-11-18 14:09:09.640046] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:17.655 [2024-11-18 14:09:09.640099] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:17.655 [2024-11-18 14:09:09.640164] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:17.655 passed 00:07:17.655 Test: test_custom_admin_cmd ...passed 00:07:17.655 Test: test_fused_compare_and_write ...passed 00:07:17.655 Test: test_multi_async_event_reqs ...passed 00:07:17.655 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:17.655 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:17.655 Test: test_multi_async_events ...passed 00:07:17.655 Test: test_rae ...passed 00:07:17.655 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:17.655 Test: test_nvmf_ctrlr_use_zcopy ...[2024-11-18 14:09:09.640605] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:17.655 [2024-11-18 14:09:09.640664] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:17.655 [2024-11-18 14:09:09.640720] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:17.655 passed 00:07:17.655 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:07:17.655 Test: test_zcopy_read ...passed 00:07:17.655 Test: test_zcopy_write ...passed 00:07:17.655 Test: test_nvmf_property_set ...passed 00:07:17.655 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:07:17.655 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:07:17.655 00:07:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.655 suites 1 1 n/a 0 0 00:07:17.655 tests 30 30 30 0 0 00:07:17.655 asserts 885 885 885 0 n/a 00:07:17.655 00:07:17.655 Elapsed time = 0.006 seconds 00:07:17.655 [2024-11-18 14:09:09.641213] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:17.655 [2024-11-18 14:09:09.641402] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:17.655 [2024-11-18 14:09:09.641495] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:17.655 [2024-11-18 14:09:09.641551] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:17.655 [2024-11-18 14:09:09.641599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:17.655 [2024-11-18 14:09:09.641639] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:17.655 14:09:09 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:17.655 00:07:17.655 00:07:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.655 http://cunit.sourceforge.net/ 00:07:17.655 00:07:17.655 00:07:17.655 Suite: nvmf 00:07:17.655 Test: test_get_rw_params ...passed 00:07:17.655 Test: test_lba_in_range ...passed 00:07:17.655 Test: test_get_dif_ctx ...passed 00:07:17.655 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:17.655 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-11-18 14:09:09.672204] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:17.655 passed 00:07:17.655 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:07:17.655 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:07:17.655 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:17.655 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:17.655 00:07:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.655 suites 1 1 n/a 0 0 00:07:17.655 tests 9 9 9 0 0 00:07:17.655 asserts 157 157 157 0 n/a 00:07:17.655 00:07:17.655 Elapsed time = 0.001 seconds 00:07:17.655 [2024-11-18 14:09:09.672442] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:17.655 [2024-11-18 14:09:09.672534] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:17.655 [2024-11-18 14:09:09.672594] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:17.655 [2024-11-18 14:09:09.672673] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:17.655 [2024-11-18 14:09:09.672772] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:17.655 [2024-11-18 14:09:09.672813] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:17.655 [2024-11-18 14:09:09.672916] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:17.655 [2024-11-18 14:09:09.672955] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:17.655 14:09:09 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:17.655 00:07:17.656 00:07:17.656 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.656 http://cunit.sourceforge.net/ 00:07:17.656 00:07:17.656 00:07:17.656 Suite: nvmf 00:07:17.656 Test: test_discovery_log ...passed 00:07:17.656 Test: test_discovery_log_with_filters ...passed 00:07:17.656 00:07:17.656 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.656 suites 1 1 n/a 0 0 00:07:17.656 tests 2 2 2 0 0 00:07:17.656 asserts 238 238 238 0 n/a 00:07:17.656 00:07:17.656 Elapsed time = 0.003 seconds 00:07:17.915 14:09:09 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:17.915 00:07:17.915 00:07:17.915 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.915 http://cunit.sourceforge.net/ 00:07:17.915 00:07:17.915 00:07:17.915 Suite: nvmf 
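[Annotation] The ctrlr_bdev_ut rejections above ("end of media", "Fused command start lba / num blocks mismatch", "Write NLB 2 * block size 512 > SGL length 1023") all reduce to simple range arithmetic on the incoming command. A minimal standalone sketch of those checks, with hypothetical names rather than SPDK's internal API:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hedged sketch of the range checks the ctrlr_bdev_ut cases exercise.
     * All names are illustrative; this is not SPDK's implementation. */
    static bool io_in_media(uint64_t start_lba, uint64_t nlb, uint64_t ns_blocks)
    {
        /* Overflow-safe form of start_lba + nlb <= ns_blocks ("end of media"). */
        return nlb <= ns_blocks && start_lba <= ns_blocks - nlb;
    }

    static bool io_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_len)
    {
        /* e.g. NLB 2 * block size 512 = 1024 > SGL length 1023 fails. */
        return nlb * (uint64_t)block_size <= sgl_len;
    }

    static bool fused_ranges_match(uint64_t lba1, uint64_t nlb1,
                                   uint64_t lba2, uint64_t nlb2)
    {
        /* A fused compare-and-write pair must target the same LBA range. */
        return lba1 == lba2 && nlb1 == nlb2;
    }

Under this sketch, the logged compare-and-write failure corresponds to io_fits_sgl(2, 512, 1023) returning false.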
00:07:17.915 Test: nvmf_test_create_subsystem ...[2024-11-18 14:09:09.744412] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:17.915 passed 00:07:17.915 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:07:17.915 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:17.915 Test: test_reservation_register ...passed 00:07:17.915 Test: test_reservation_register_with_ptpl ...passed 00:07:17.915 Test: test_reservation_acquire_preempt_1 ...[2024-11-18 14:09:09.744725] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:17.915 [2024-11-18 14:09:09.744822] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:17.915 [2024-11-18 14:09:09.744885] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:17.915 [2024-11-18 14:09:09.744920] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:17.915 [2024-11-18 14:09:09.744967] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:17.915 [2024-11-18 14:09:09.745077] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:17.915 [2024-11-18 14:09:09.745236] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
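[Annotation] The nvmf_nqn_is_valid rejections above spell out the NQN grammar this suite probes: total length between 11 and 223 bytes, a user-specified name after a ':', and reverse-domain labels that start with a letter and end with an alphanumeric character. A simplified sketch covering only that subset (illustrative, not SPDK's validator; the date-field, UUID, and UTF-8 rules are omitted):

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    /* Simplified NQN check covering only the rules visible in the log above. */
    static bool nqn_valid_sketch(const char *nqn)
    {
        size_t len = strlen(nqn);
        if (len < 11 || len > 223)                 /* "length 224 > max 223" */
            return false;
        if (strncmp(nqn, "nqn.", 4) != 0)
            return false;
        const char *colon = strchr(nqn + 4, ':');
        if (colon == NULL || colon[1] == '\0')     /* user name after ':' required */
            return false;
        const char *label = strchr(nqn + 4, '.');  /* first label after "nqn.YYYY-MM." */
        if (label == NULL || label > colon)
            return false;
        for (label = label + 1; label < colon; ) {
            const char *end = label;
            while (end < colon && *end != '.')
                end++;
            if (end == label)                      /* empty label, e.g. "io..spdk" */
                return false;
            if (!isalpha((unsigned char)label[0])) /* "must start with a letter" */
                return false;
            if (!isalnum((unsigned char)end[-1]))  /* "must end with an alphanumeric symbol" */
                return false;
            label = (end < colon) ? end + 1 : end;
        }
        return true;
    }

With this sketch, "nqn.2016-06.io.3spdk:sub" and "nqn.2016-06.io.spdk-:subsystem1" both fail for exactly the reasons logged.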
00:07:17.915 [2024-11-18 14:09:09.745337] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:17.915 [2024-11-18 14:09:09.745377] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:17.915 [2024-11-18 14:09:09.745412] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:17.915 [2024-11-18 14:09:09.745582] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:17.915 [2024-11-18 14:09:09.745709] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:17.915 [2024-11-18 14:09:09.746028] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.915 [2024-11-18 14:09:09.746158] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:17.916 [2024-11-18 14:09:09.747183] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 passed 00:07:17.916 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:17.916 Test: test_reservation_release ...passed 00:07:17.916 Test: test_reservation_unregister_notification ...[2024-11-18 14:09:09.749475] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 [2024-11-18 14:09:09.749918] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 passed 00:07:17.916 Test: test_reservation_release_notification ...[2024-11-18 14:09:09.750528] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 passed 00:07:17.916 Test: test_reservation_release_notification_write_exclusive ...[2024-11-18 14:09:09.750880] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 passed 00:07:17.916 Test: test_reservation_clear_notification ...[2024-11-18 14:09:09.751208] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 passed 00:07:17.916 Test: test_reservation_preempt_notification ...[2024-11-18 14:09:09.751734] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:17.916 passed 00:07:17.916 Test: test_spdk_nvmf_ns_event ...passed 00:07:17.916 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:17.916 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:17.916 Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-18 14:09:09.753109] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less 
than minimum defined by NVMf spec, use min value 00:07:17.916 [2024-11-18 14:09:09.753214] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:17.916 [2024-11-18 14:09:09.753363] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:17.916 [2024-11-18 14:09:09.753444] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:17.916 [2024-11-18 14:09:09.753492] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:46e8a991-9ce4-45ae-a92a-81531aded3e": uuid is not the correct length 00:07:17.916 [2024-11-18 14:09:09.753549] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:17.916 [2024-11-18 14:09:09.753665] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:17.916 passed 00:07:17.916 Test: test_nvmf_ns_reservation_report ...passed 00:07:17.916 Test: test_nvmf_nqn_is_valid ...passed 00:07:17.916 Test: test_nvmf_ns_reservation_restore ...passed 00:07:17.916 Test: test_nvmf_subsystem_state_change ...passed 00:07:17.916 Test: test_nvmf_reservation_custom_ops ...passed 00:07:17.916 00:07:17.916 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.916 suites 1 1 n/a 0 0 00:07:17.916 tests 22 22 22 0 0 00:07:17.916 asserts 407 407 407 0 n/a 00:07:17.916 00:07:17.916 Elapsed time = 0.011 seconds 00:07:17.916 14:09:09 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:17.916 00:07:17.916 00:07:17.916 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.916 http://cunit.sourceforge.net/ 00:07:17.916 00:07:17.916 00:07:17.916 Suite: nvmf 00:07:17.916 Test: test_nvmf_tcp_create ...[2024-11-18 14:09:09.820176] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:17.916 passed 00:07:17.916 Test: test_nvmf_tcp_destroy ...passed 00:07:17.916 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:17.916 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:17.916 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:17.916 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:17.916 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:17.916 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-18 14:09:09.922616] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 passed 00:07:17.916 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:17.916 Test: test_nvmf_tcp_icreq_handle ...passed 00:07:17.916 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:17.916 Test: test_nvmf_tcp_invalid_sgl ...passed 00:07:17.916 Test: test_nvmf_tcp_pdu_ch_handle ...passed 00:07:17.916 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-18 14:09:09.922715] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.922832] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.922885] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.922925] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923021] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:17.916 [2024-11-18 14:09:09.923121] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.923206] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923248] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:17.916 [2024-11-18 14:09:09.923299] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923338] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.923382] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923425] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.923488] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923565] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:17.916 [2024-11-18 14:09:09.923618] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.923657] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf3500 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923709] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc9bbf4260 00:07:17.916 [2024-11-18 14:09:09.923826] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.923889] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.923942] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc9bbf39c0 00:07:17.916 [2024-11-18 14:09:09.923985] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.924029] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.924069] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:17.916 [2024-11-18 14:09:09.924112] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.924170] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.924215] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:17.916 [2024-11-18 14:09:09.924256] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.924302] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.924344] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.916 [2024-11-18 14:09:09.924388] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.916 [2024-11-18 14:09:09.924454] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.917 [2024-11-18 14:09:09.924494] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.917 [2024-11-18 14:09:09.924563] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.917 [2024-11-18 14:09:09.924597] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.917 [2024-11-18 14:09:09.924645] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.917 [2024-11-18 14:09:09.924682] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.917 [2024-11-18 14:09:09.924751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.917 [2024-11-18 14:09:09.924787] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.917 [2024-11-18 14:09:09.924852] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:17.917 [2024-11-18 14:09:09.924915] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffc9bbf39c0 is same with the state(5) to be set 00:07:17.917 passed 00:07:17.917 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:07:17.917 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-18 14:09:09.949050] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:17.917 [2024-11-18 14:09:09.949133] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:17.917 [2024-11-18 14:09:09.949535] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:17.917 passed 00:07:17.917 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:07:17.917 00:07:17.917 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.917 suites 1 1 n/a 0 0 00:07:17.917 tests 17 17 17 0 0 00:07:17.917 asserts 222 222 222 0 n/a 00:07:17.917 00:07:17.917 Elapsed time = 0.153 seconds 00:07:17.917 [2024-11-18 14:09:09.949596] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:17.917 [2024-11-18 14:09:09.949849] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:17.917 [2024-11-18 14:09:09.949908] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:18.176 14:09:10 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:18.176 00:07:18.176 00:07:18.176 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.176 http://cunit.sourceforge.net/ 00:07:18.176 00:07:18.176 00:07:18.176 Suite: nvmf 00:07:18.176 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:18.176 00:07:18.176 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.176 suites 1 1 n/a 0 0 00:07:18.176 tests 1 1 1 0 0 00:07:18.176 asserts 17 17 17 0 n/a 00:07:18.176 00:07:18.176 Elapsed time = 0.023 seconds 00:07:18.176 00:07:18.176 real 0m0.490s 00:07:18.176 user 0m0.246s 00:07:18.176 sys 0m0.245s 00:07:18.176 14:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.176 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.176 ************************************ 00:07:18.176 END TEST unittest_nvmf 00:07:18.176 ************************************ 00:07:18.176 14:09:10 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.176 14:09:10 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.176 14:09:10 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:18.176 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.176 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.176 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.176 ************************************ 00:07:18.176 START TEST unittest_nvmf_rdma 00:07:18.176 ************************************ 00:07:18.176 14:09:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:18.176 00:07:18.176 00:07:18.176 CUnit - A unit testing framework for C - Version 
2.1-3 00:07:18.176 http://cunit.sourceforge.net/ 00:07:18.176 00:07:18.176 00:07:18.176 Suite: nvmf 00:07:18.176 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-18 14:09:10.193517] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:18.176 [2024-11-18 14:09:10.193863] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:18.176 [2024-11-18 14:09:10.193936] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:18.176 passed 00:07:18.176 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:18.176 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:18.176 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:18.176 Test: test_nvmf_rdma_opts_init ...passed 00:07:18.176 Test: test_nvmf_rdma_request_free_data ...passed 00:07:18.176 Test: test_nvmf_rdma_update_ibv_state ...[2024-11-18 14:09:10.195286] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:07:18.176 passed 00:07:18.176 Test: test_nvmf_rdma_resources_create ...[2024-11-18 14:09:10.195353] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:18.176 passed 00:07:18.176 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:18.176 Test: test_nvmf_rdma_resize_cq ...[2024-11-18 14:09:10.196737] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:07:18.176 Using CQ of insufficient size may lead to CQ overrun 00:07:18.176 passed 00:07:18.176 00:07:18.176 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.176 suites 1 1 n/a 0 0 00:07:18.176 tests 10 10 10 0 0 00:07:18.176 asserts 584 584 584 0 n/a 00:07:18.176 00:07:18.176 Elapsed time = 0.004 seconds 00:07:18.176 [2024-11-18 14:09:10.196882] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:18.176 [2024-11-18 14:09:10.196959] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:18.176 00:07:18.176 real 0m0.043s 00:07:18.176 user 0m0.009s 00:07:18.176 sys 0m0.035s 00:07:18.176 14:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.176 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.176 ************************************ 00:07:18.176 END TEST unittest_nvmf_rdma 00:07:18.176 ************************************ 00:07:18.436 14:09:10 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.436 14:09:10 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:07:18.436 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.436 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.436 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.436 ************************************ 00:07:18.436 START TEST unittest_scsi 00:07:18.436 ************************************ 00:07:18.436 14:09:10 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:07:18.436 14:09:10 -- 
unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:18.436 00:07:18.436 00:07:18.436 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.436 http://cunit.sourceforge.net/ 00:07:18.436 00:07:18.436 00:07:18.436 Suite: dev_suite 00:07:18.436 Test: dev_destruct_null_dev ...passed 00:07:18.436 Test: dev_destruct_zero_luns ...passed 00:07:18.436 Test: dev_destruct_null_lun ...passed 00:07:18.436 Test: dev_destruct_success ...passed 00:07:18.436 Test: dev_construct_num_luns_zero ...[2024-11-18 14:09:10.286274] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:18.436 passed 00:07:18.436 Test: dev_construct_no_lun_zero ...[2024-11-18 14:09:10.286566] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:18.436 passed 00:07:18.436 Test: dev_construct_null_lun ...passed 00:07:18.436 Test: dev_construct_name_too_long ...passed 00:07:18.436 Test: dev_construct_success ...passed 00:07:18.436 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:18.437 Test: dev_queue_mgmt_task_success ...[2024-11-18 14:09:10.286620] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:18.437 [2024-11-18 14:09:10.286670] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:18.437 passed 00:07:18.437 Test: dev_queue_task_success ...passed 00:07:18.437 Test: dev_stop_success ...passed 00:07:18.437 Test: dev_add_port_max_ports ...passed 00:07:18.437 Test: dev_add_port_construct_failure1 ...[2024-11-18 14:09:10.286955] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:18.437 [2024-11-18 14:09:10.287048] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:18.437 passed 00:07:18.437 Test: dev_add_port_construct_failure2 ...[2024-11-18 14:09:10.287145] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:18.437 passed 00:07:18.437 Test: dev_add_port_success1 ...passed 00:07:18.437 Test: dev_add_port_success2 ...passed 00:07:18.437 Test: dev_add_port_success3 ...passed 00:07:18.437 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:18.437 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:18.437 Test: dev_find_port_by_id_success ...passed 00:07:18.437 Test: dev_add_lun_bdev_not_found ...passed 00:07:18.437 Test: dev_add_lun_no_free_lun_id ...[2024-11-18 14:09:10.287566] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:18.437 passed 00:07:18.437 Test: dev_add_lun_success1 ...passed 00:07:18.437 Test: dev_add_lun_success2 ...passed 00:07:18.437 Test: dev_check_pending_tasks ...passed 00:07:18.437 Test: dev_iterate_luns ...passed 00:07:18.437 Test: dev_find_free_lun ...passed 00:07:18.437 00:07:18.437 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.437 suites 1 1 n/a 0 0 00:07:18.437 tests 29 29 29 0 0 00:07:18.437 asserts 97 97 97 0 n/a 
00:07:18.437 00:07:18.437 Elapsed time = 0.002 seconds 00:07:18.437 14:09:10 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:18.437 00:07:18.437 00:07:18.437 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.437 http://cunit.sourceforge.net/ 00:07:18.437 00:07:18.437 00:07:18.437 Suite: lun_suite 00:07:18.437 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-11-18 14:09:10.324725] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:18.437 passed 00:07:18.437 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-18 14:09:10.325138] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:18.437 passed 00:07:18.437 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:18.437 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:18.437 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:18.437 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-11-18 14:09:10.325339] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:18.437 passed 00:07:18.437 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:18.437 Test: lun_append_task_null_lun_not_supported ...passed 00:07:18.437 Test: lun_execute_scsi_task_pending ...passed 00:07:18.437 Test: lun_execute_scsi_task_complete ...passed 00:07:18.437 Test: lun_execute_scsi_task_resize ...passed 00:07:18.437 Test: lun_destruct_success ...passed 00:07:18.437 Test: lun_construct_null_ctx ...[2024-11-18 14:09:10.325519] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:18.437 passed 00:07:18.437 Test: lun_construct_success ...passed 00:07:18.437 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:07:18.437 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:18.437 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:18.437 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:18.437 00:07:18.437 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.437 suites 1 1 n/a 0 0 00:07:18.437 tests 18 18 18 0 0 00:07:18.437 asserts 153 153 153 0 n/a 00:07:18.437 00:07:18.437 Elapsed time = 0.001 seconds 00:07:18.437 14:09:10 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:18.437 00:07:18.437 00:07:18.437 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.437 http://cunit.sourceforge.net/ 00:07:18.437 00:07:18.437 00:07:18.437 Suite: scsi_suite 00:07:18.437 Test: scsi_init ...passed 00:07:18.437 00:07:18.437 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.437 suites 1 1 n/a 0 0 00:07:18.437 tests 1 1 1 0 0 00:07:18.437 asserts 1 1 1 0 n/a 00:07:18.437 00:07:18.437 Elapsed time = 0.000 seconds 00:07:18.437 14:09:10 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:18.437 00:07:18.437 00:07:18.437 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.437 http://cunit.sourceforge.net/ 00:07:18.437 00:07:18.437 00:07:18.437 Suite: translation_suite 00:07:18.437 Test: mode_select_6_test ...passed 00:07:18.437 Test: mode_select_6_test2 ...passed 00:07:18.437 Test: mode_sense_6_test ...passed 00:07:18.437 Test: mode_sense_10_test ...passed 00:07:18.437 Test: inquiry_evpd_test ...passed 00:07:18.437 Test: 
inquiry_standard_test ...passed 00:07:18.437 Test: inquiry_overflow_test ...passed 00:07:18.437 Test: task_complete_test ...passed 00:07:18.437 Test: lba_range_test ...passed 00:07:18.437 Test: xfer_len_test ...[2024-11-18 14:09:10.389509] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:18.437 passed 00:07:18.437 Test: xfer_test ...passed 00:07:18.437 Test: scsi_name_padding_test ...passed 00:07:18.437 Test: get_dif_ctx_test ...passed 00:07:18.437 Test: unmap_split_test ...passed 00:07:18.437 00:07:18.437 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.437 suites 1 1 n/a 0 0 00:07:18.437 tests 14 14 14 0 0 00:07:18.437 asserts 1200 1200 1200 0 n/a 00:07:18.437 00:07:18.437 Elapsed time = 0.004 seconds 00:07:18.437 14:09:10 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:18.437 00:07:18.437 00:07:18.437 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.437 http://cunit.sourceforge.net/ 00:07:18.437 00:07:18.437 00:07:18.437 Suite: reservation_suite 00:07:18.437 Test: test_reservation_register ...[2024-11-18 14:09:10.419888] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.437 passed 00:07:18.437 Test: test_reservation_reserve ...[2024-11-18 14:09:10.420260] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.437 [2024-11-18 14:09:10.420343] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:18.437 passed 00:07:18.437 Test: test_reservation_preempt_non_all_regs ...[2024-11-18 14:09:10.420453] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:18.437 [2024-11-18 14:09:10.420525] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.437 [2024-11-18 14:09:10.420611] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:18.437 passed 00:07:18.438 Test: test_reservation_preempt_all_regs ...[2024-11-18 14:09:10.420753] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.438 passed 00:07:18.438 Test: test_reservation_cmds_conflict ...[2024-11-18 14:09:10.420909] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.438 [2024-11-18 14:09:10.420991] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:18.438 [2024-11-18 14:09:10.421056] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:18.438 [2024-11-18 14:09:10.421104] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:18.438 [2024-11-18 14:09:10.421154] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:18.438 [2024-11-18 14:09:10.421191] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:18.438 passed 00:07:18.438 Test: test_scsi2_reserve_release ...passed 00:07:18.438 Test: test_pr_with_scsi2_reserve_release ...[2024-11-18 14:09:10.421302] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:18.438 passed 00:07:18.438 00:07:18.438 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.438 suites 1 1 n/a 0 0 00:07:18.438 tests 7 7 7 0 0 00:07:18.438 asserts 257 257 257 0 n/a 00:07:18.438 00:07:18.438 Elapsed time = 0.002 seconds 00:07:18.438 00:07:18.438 real 0m0.166s 00:07:18.438 user 0m0.099s 00:07:18.438 sys 0m0.069s 00:07:18.438 14:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.438 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.438 ************************************ 00:07:18.438 END TEST unittest_scsi 00:07:18.438 ************************************ 00:07:18.438 14:09:10 -- unit/unittest.sh@252 -- # uname -s 00:07:18.438 14:09:10 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:07:18.438 14:09:10 -- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock 00:07:18.438 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.438 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.438 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.438 ************************************ 00:07:18.438 START TEST unittest_sock 00:07:18.438 ************************************ 00:07:18.438 14:09:10 -- common/autotest_common.sh@1114 -- # unittest_sock 00:07:18.438 14:09:10 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:18.697 00:07:18.697 00:07:18.697 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.697 http://cunit.sourceforge.net/ 00:07:18.697 00:07:18.697 00:07:18.697 Suite: sock 00:07:18.697 Test: posix_sock ...passed 00:07:18.697 Test: ut_sock ...passed 00:07:18.697 Test: posix_sock_group ...passed 00:07:18.697 Test: ut_sock_group ...passed 00:07:18.697 Test: posix_sock_group_fairness ...passed 00:07:18.697 Test: _posix_sock_close ...passed 00:07:18.697 Test: sock_get_default_opts ...passed 00:07:18.697 Test: ut_sock_impl_get_set_opts ...passed 00:07:18.697 Test: posix_sock_impl_get_set_opts ...passed 00:07:18.697 Test: ut_sock_map ...passed 00:07:18.697 Test: override_impl_opts ...passed 00:07:18.697 Test: ut_sock_group_get_ctx ...passed 00:07:18.697 00:07:18.697 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.697 suites 1 1 n/a 0 0 00:07:18.697 tests 12 12 12 0 0 00:07:18.697 asserts 349 349 349 0 n/a 00:07:18.697 00:07:18.697 Elapsed time = 0.008 seconds 00:07:18.697 14:09:10 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:18.697 00:07:18.697 00:07:18.697 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.697 http://cunit.sourceforge.net/ 00:07:18.697 00:07:18.697 00:07:18.697 Suite: posix 00:07:18.697 Test: flush ...passed 00:07:18.697 00:07:18.697 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.697 suites 1 1 n/a 0 0 00:07:18.697 tests 1 1 1 0 0 00:07:18.697 asserts 28 28 28 0 n/a 00:07:18.697 00:07:18.697 Elapsed time = 0.000 seconds 00:07:18.697 14:09:10 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.697 
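[Annotation] The scsi_pr_check rejections above pit the current reservation type against command opcodes 0x28 (READ(10)) and 0x2a (WRITE(10)), which reduces to a small allow/deny matrix. An illustrative sketch of that decision, using assumed enum and function names rather than SPDK's types:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative persistent-reservation check for the two opcodes seen
     * in the log: 0x28 READ(10) and 0x2a WRITE(10). Names are assumptions. */
    enum pr_type_sketch {
        PR_WRITE_EXCLUSIVE,        /* non-holders may still read */
        PR_EXCLUSIVE_ACCESS,       /* non-holders may neither read nor write */
        PR_WRITE_EXCL_REGISTRANTS, /* registrants may also write */
        PR_EXCL_ACCESS_REGISTRANTS /* only registrants may access */
    };

    static bool pr_allows_cmd(enum pr_type_sketch type, uint8_t opcode,
                              bool is_holder, bool is_registrant)
    {
        bool is_read  = (opcode == 0x28);
        bool is_write = (opcode == 0x2a);

        if (is_holder)
            return true;           /* the holder is never blocked */
        switch (type) {
        case PR_WRITE_EXCLUSIVE:
            return !is_write;
        case PR_EXCLUSIVE_ACCESS:
            return !is_read && !is_write;   /* "rejects command 0x28" / "0x2a" */
        case PR_WRITE_EXCL_REGISTRANTS:
            return is_registrant || !is_write;  /* "Registrants only ... reject 0x2a" */
        case PR_EXCL_ACCESS_REGISTRANTS:
            return is_registrant;
        }
        return false;
    }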
00:07:18.697 real 0m0.098s 00:07:18.697 user 0m0.032s 00:07:18.697 sys 0m0.043s 00:07:18.697 14:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.697 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.697 ************************************ 00:07:18.697 END TEST unittest_sock 00:07:18.697 ************************************ 00:07:18.697 14:09:10 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:18.697 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.697 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.697 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.697 ************************************ 00:07:18.697 START TEST unittest_thread 00:07:18.697 ************************************ 00:07:18.697 14:09:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:18.697 00:07:18.697 00:07:18.697 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.697 http://cunit.sourceforge.net/ 00:07:18.697 00:07:18.697 00:07:18.697 Suite: io_channel 00:07:18.697 Test: thread_alloc ...passed 00:07:18.697 Test: thread_send_msg ...passed 00:07:18.697 Test: thread_poller ...passed 00:07:18.697 Test: poller_pause ...passed 00:07:18.697 Test: thread_for_each ...passed 00:07:18.697 Test: for_each_channel_remove ...passed 00:07:18.697 Test: for_each_channel_unreg ...[2024-11-18 14:09:10.685562] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x7ffcce791a90 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:18.697 passed 00:07:18.697 Test: thread_name ...passed 00:07:18.697 Test: channel ...[2024-11-18 14:09:10.689675] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x56243623b0e0 00:07:18.697 passed 00:07:18.697 Test: channel_destroy_races ...passed 00:07:18.697 Test: thread_exit_test ...[2024-11-18 14:09:10.694755] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:18.697 passed 00:07:18.697 Test: thread_update_stats_test ...passed 00:07:18.697 Test: nested_channel ...passed 00:07:18.697 Test: device_unregister_and_thread_exit_race ...passed 00:07:18.697 Test: cache_closest_timed_poller ...passed 00:07:18.697 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:18.697 Test: io_device_lookup ...passed 00:07:18.697 Test: spdk_spin ...[2024-11-18 14:09:10.705352] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:18.697 [2024-11-18 14:09:10.705413] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcce791a80 00:07:18.697 [2024-11-18 14:09:10.705517] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:18.697 [2024-11-18 14:09:10.707149] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:18.697 [2024-11-18 14:09:10.707241] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcce791a80 00:07:18.697 [2024-11-18 14:09:10.707283] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:18.697 [2024-11-18 14:09:10.707321] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcce791a80 00:07:18.697 [2024-11-18 14:09:10.707361] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:18.697 [2024-11-18 14:09:10.707406] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcce791a80 00:07:18.697 [2024-11-18 14:09:10.707456] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:18.697 [2024-11-18 14:09:10.707501] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x7ffcce791a80 00:07:18.697 passed 00:07:18.697 Test: for_each_channel_and_thread_exit_race ...passed 00:07:18.697 Test: for_each_thread_and_thread_exit_race ...passed 00:07:18.697 00:07:18.697 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.697 suites 1 1 n/a 0 0 00:07:18.697 tests 20 20 20 0 0 00:07:18.697 asserts 409 409 409 0 n/a 00:07:18.697 00:07:18.697 Elapsed time = 0.050 seconds 00:07:18.697 00:07:18.697 real 0m0.088s 00:07:18.698 user 0m0.076s 00:07:18.698 sys 0m0.012s 00:07:18.698 14:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.698 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.698 ************************************ 00:07:18.698 END TEST unittest_thread 00:07:18.698 ************************************ 00:07:18.957 14:09:10 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:18.957 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.957 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.957 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.957 ************************************ 00:07:18.957 START TEST unittest_iobuf 00:07:18.957 ************************************ 00:07:18.957 14:09:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:18.957 00:07:18.957 00:07:18.957 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.957 http://cunit.sourceforge.net/ 00:07:18.957 00:07:18.957 00:07:18.957 Suite: io_channel 00:07:18.957 Test: iobuf ...passed 00:07:18.957 Test: iobuf_cache ...[2024-11-18 14:09:10.812222] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:18.957 [2024-11-18 14:09:10.812511] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:18.957 [2024-11-18 14:09:10.812656] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:18.957 [2024-11-18 14:09:10.812705] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
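[Annotation] Three of the spdk_spin error classes exercised above (error 2: deadlock on re-lock, error 3: unlock on the wrong thread, error 5: destroying a held spinlock) are ownership violations. A rough pthread-based model of an owner-tracked lock, purely illustrative and much simpler than SPDK's spinlock:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Rough model of an owner-tracked lock; the checks mirror the error
     * classes in the log above. Not SPDK's implementation. */
    struct owned_lock {
        pthread_mutex_t mu;
        pthread_t owner;
        bool held;
    };

    static bool owned_lock_acquire(struct owned_lock *l)
    {
        /* Safe self-check: only this thread ever sets owner to itself. */
        if (l->held && pthread_equal(l->owner, pthread_self())) {
            fprintf(stderr, "error 2: deadlock, thread already holds the lock\n");
            return false;
        }
        pthread_mutex_lock(&l->mu);
        l->owner = pthread_self();
        l->held = true;
        return true;
    }

    static bool owned_lock_release(struct owned_lock *l)
    {
        if (!l->held || !pthread_equal(l->owner, pthread_self())) {
            fprintf(stderr, "error 3: unlock on wrong thread\n");
            return false;
        }
        l->held = false;
        pthread_mutex_unlock(&l->mu);
        return true;
    }

    static bool owned_lock_destroy(struct owned_lock *l)
    {
        if (l->held) {
            fprintf(stderr, "error 5: destroying a held lock\n");
            return false;
        }
        return pthread_mutex_destroy(&l->mu) == 0;
    }

Error 1 ("Not an SPDK thread") has no analogue here, since this model has no notion of registered SPDK threads.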
00:07:18.957 [2024-11-18 14:09:10.812781] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:18.957 [2024-11-18 14:09:10.812821] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:18.957 passed 00:07:18.957 00:07:18.957 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.957 suites 1 1 n/a 0 0 00:07:18.957 tests 2 2 2 0 0 00:07:18.957 asserts 107 107 107 0 n/a 00:07:18.957 00:07:18.957 Elapsed time = 0.006 seconds 00:07:18.957 00:07:18.957 real 0m0.041s 00:07:18.957 user 0m0.020s 00:07:18.957 sys 0m0.021s 00:07:18.957 14:09:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.957 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.957 ************************************ 00:07:18.957 END TEST unittest_iobuf 00:07:18.957 ************************************ 00:07:18.957 14:09:10 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:07:18.957 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.957 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.957 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:07:18.957 ************************************ 00:07:18.957 START TEST unittest_util 00:07:18.957 ************************************ 00:07:18.957 14:09:10 -- common/autotest_common.sh@1114 -- # unittest_util 00:07:18.957 14:09:10 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:18.957 00:07:18.957 00:07:18.957 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.957 http://cunit.sourceforge.net/ 00:07:18.957 00:07:18.957 00:07:18.957 Suite: base64 00:07:18.957 Test: test_base64_get_encoded_strlen ...passed 00:07:18.957 Test: test_base64_get_decoded_len ...passed 00:07:18.957 Test: test_base64_encode ...passed 00:07:18.957 Test: test_base64_decode ...passed 00:07:18.957 Test: test_base64_urlsafe_encode ...passed 00:07:18.957 Test: test_base64_urlsafe_decode ...passed 00:07:18.957 00:07:18.957 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.957 suites 1 1 n/a 0 0 00:07:18.957 tests 6 6 6 0 0 00:07:18.957 asserts 112 112 112 0 n/a 00:07:18.957 00:07:18.957 Elapsed time = 0.000 seconds 00:07:18.957 14:09:10 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:18.957 00:07:18.957 00:07:18.957 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.957 http://cunit.sourceforge.net/ 00:07:18.957 00:07:18.957 00:07:18.957 Suite: bit_array 00:07:18.957 Test: test_1bit ...passed 00:07:18.957 Test: test_64bit ...passed 00:07:18.957 Test: test_find ...passed 00:07:18.957 Test: test_resize ...passed 00:07:18.957 Test: test_errors ...passed 00:07:18.957 Test: test_count ...passed 00:07:18.957 Test: test_mask_store_load ...passed 00:07:18.957 Test: test_mask_clear ...passed 00:07:18.957 00:07:18.958 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.958 suites 1 1 n/a 0 0 00:07:18.958 tests 8 8 8 0 0 00:07:18.958 asserts 5075 5075 5075 0 n/a 00:07:18.958 00:07:18.958 Elapsed time = 0.002 seconds 00:07:18.958 14:09:10 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:18.958 00:07:18.958 00:07:18.958 CUnit - A unit testing framework for C - Version 2.1-3 
00:07:18.958 http://cunit.sourceforge.net/ 00:07:18.958 00:07:18.958 00:07:18.958 Suite: cpuset 00:07:18.958 Test: test_cpuset ...passed 00:07:18.958 Test: test_cpuset_parse ...[2024-11-18 14:09:10.957545] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:18.958 [2024-11-18 14:09:10.957832] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:07:18.958 [2024-11-18 14:09:10.957923] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:18.958 [2024-11-18 14:09:10.958006] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:18.958 [2024-11-18 14:09:10.958044] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:18.958 [2024-11-18 14:09:10.958085] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:18.958 [2024-11-18 14:09:10.958121] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:18.958 passed 00:07:18.958 Test: test_cpuset_fmt ...[2024-11-18 14:09:10.958174] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:18.958 passed 00:07:18.958 00:07:18.958 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.958 suites 1 1 n/a 0 0 00:07:18.958 tests 3 3 3 0 0 00:07:18.958 asserts 65 65 65 0 n/a 00:07:18.958 00:07:18.958 Elapsed time = 0.003 seconds 00:07:18.958 14:09:10 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:18.958 00:07:18.958 00:07:18.958 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.958 http://cunit.sourceforge.net/ 00:07:18.958 00:07:18.958 00:07:18.958 Suite: crc16 00:07:18.958 Test: test_crc16_t10dif ...passed 00:07:18.958 Test: test_crc16_t10dif_seed ...passed 00:07:18.958 Test: test_crc16_t10dif_copy ...passed 00:07:18.958 00:07:18.958 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.958 suites 1 1 n/a 0 0 00:07:18.958 tests 3 3 3 0 0 00:07:18.958 asserts 5 5 5 0 n/a 00:07:18.958 00:07:18.958 Elapsed time = 0.000 seconds 00:07:18.958 14:09:11 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:18.958 00:07:18.958 00:07:18.958 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.958 http://cunit.sourceforge.net/ 00:07:18.958 00:07:18.958 00:07:18.958 Suite: crc32_ieee 00:07:18.958 Test: test_crc32_ieee ...passed 00:07:18.958 00:07:18.958 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.958 suites 1 1 n/a 0 0 00:07:18.958 tests 1 1 1 0 0 00:07:18.958 asserts 1 1 1 0 n/a 00:07:18.958 00:07:18.958 Elapsed time = 0.000 seconds 00:07:18.958 14:09:11 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:19.220 00:07:19.220 00:07:19.220 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.220 http://cunit.sourceforge.net/ 00:07:19.220 00:07:19.220 00:07:19.220 Suite: crc32c 00:07:19.220 Test: test_crc32c ...passed 00:07:19.220 Test: test_crc32c_nvme ...passed 00:07:19.220 00:07:19.220 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.220 suites 1 1 n/a 0 0 00:07:19.220 
tests 2 2 2 0 0 00:07:19.220 asserts 16 16 16 0 n/a 00:07:19.220 00:07:19.220 Elapsed time = 0.000 seconds 00:07:19.220 14:09:11 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:19.220 00:07:19.220 00:07:19.220 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.220 http://cunit.sourceforge.net/ 00:07:19.220 00:07:19.220 00:07:19.220 Suite: crc64 00:07:19.220 Test: test_crc64_nvme ...passed 00:07:19.220 00:07:19.220 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.220 suites 1 1 n/a 0 0 00:07:19.220 tests 1 1 1 0 0 00:07:19.220 asserts 4 4 4 0 n/a 00:07:19.220 00:07:19.220 Elapsed time = 0.001 seconds 00:07:19.220 14:09:11 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:19.220 00:07:19.220 00:07:19.220 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.220 http://cunit.sourceforge.net/ 00:07:19.220 00:07:19.220 00:07:19.220 Suite: string 00:07:19.220 Test: test_parse_ip_addr ...passed 00:07:19.220 Test: test_str_chomp ...passed 00:07:19.220 Test: test_parse_capacity ...passed 00:07:19.220 Test: test_sprintf_append_realloc ...passed 00:07:19.220 Test: test_strtol ...passed 00:07:19.220 Test: test_strtoll ...passed 00:07:19.220 Test: test_strarray ...passed 00:07:19.220 Test: test_strcpy_replace ...passed 00:07:19.220 00:07:19.220 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.220 suites 1 1 n/a 0 0 00:07:19.220 tests 8 8 8 0 0 00:07:19.220 asserts 161 161 161 0 n/a 00:07:19.220 00:07:19.220 Elapsed time = 0.001 seconds 00:07:19.220 14:09:11 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:19.220 00:07:19.220 00:07:19.220 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.220 http://cunit.sourceforge.net/ 00:07:19.220 00:07:19.220 00:07:19.220 Suite: dif 00:07:19.220 Test: dif_generate_and_verify_test ...[2024-11-18 14:09:11.136709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:19.220 [2024-11-18 14:09:11.137334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:19.220 [2024-11-18 14:09:11.137635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:19.220 [2024-11-18 14:09:11.137928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:19.220 [2024-11-18 14:09:11.138211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:19.220 [2024-11-18 14:09:11.138513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:19.220 passed 00:07:19.220 Test: dif_disable_check_test ...[2024-11-18 14:09:11.139617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:19.220 [2024-11-18 14:09:11.139972] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:19.220 [2024-11-18 14:09:11.140267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:19.220 passed 
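(The cpuset and CRC suites above exercise SPDK's util library directly. As a minimal sketch of the APIs under test — assuming the public headers spdk/cpuset.h and spdk/crc32.h on this branch; the snippet is illustrative and not part of the recorded test run — the failures logged by test_cpuset_parse correspond to calls like:

    #include <stdio.h>
    #include <stdint.h>
    #include "spdk/cpuset.h"
    #include "spdk/crc32.h"

    int main(void)
    {
        struct spdk_cpuset set;

        /* test_cpuset_parse feeds malformed core lists such as "[10--11]" and
         * "[,10-11]"; a well-formed list uses single dashes and commas inside
         * brackets, or a plain hex mask. spdk_cpuset_parse() returns 0 only on
         * success, which is what the suite asserts. */
        if (spdk_cpuset_parse(&set, "[0,2-3]") == 0) {
            printf("cpumask: 0x%s\n", spdk_cpuset_fmt(&set));
        }

        /* crc32c_ut drives spdk_crc32c_update(); the ~0U seed here is just a
         * common starting value chosen for illustration. */
        uint8_t buf[16] = {0};
        printf("crc32c: 0x%08x\n", spdk_crc32c_update(buf, sizeof(buf), ~0U));
        return 0;
    }

)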
00:07:19.220 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-18 14:09:11.141346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:19.220 [2024-11-18 14:09:11.141673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:19.220 [2024-11-18 14:09:11.141995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:19.220 [2024-11-18 14:09:11.142347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:19.220 [2024-11-18 14:09:11.142678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.220 [2024-11-18 14:09:11.142993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.220 [2024-11-18 14:09:11.143316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.220 [2024-11-18 14:09:11.143624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:19.220 [2024-11-18 14:09:11.143929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:19.220 [2024-11-18 14:09:11.144252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:19.220 [2024-11-18 14:09:11.144576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:19.220 passed 00:07:19.220 Test: dif_apptag_mask_test ...[2024-11-18 14:09:11.144905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:19.220 [2024-11-18 14:09:11.145209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:19.220 passed 00:07:19.220 Test: dif_sec_512_md_0_error_test ...[2024-11-18 14:09:11.145401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:19.220 passed 00:07:19.220 Test: dif_sec_4096_md_0_error_test ...[2024-11-18 14:09:11.145452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:19.220 passed 00:07:19.220 Test: dif_sec_4100_md_128_error_test ...[2024-11-18 14:09:11.145496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:19.220 [2024-11-18 14:09:11.145550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:19.220 [2024-11-18 14:09:11.145593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:19.220 passed 00:07:19.220 Test: dif_guard_seed_test ...passed 00:07:19.220 Test: dif_guard_value_test ...passed 00:07:19.220 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:19.220 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.220 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.221 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 14:09:11.189830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd0c, Actual=fd4c 00:07:19.221 [2024-11-18 14:09:11.192304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe61, Actual=fe21 00:07:19.221 [2024-11-18 14:09:11.194769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.197244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.199730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40005f 00:07:19.221 [2024-11-18 14:09:11.202184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40005f 00:07:19.221 [2024-11-18 14:09:11.204656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=e4b0 00:07:19.221 [2024-11-18 14:09:11.206773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe21, Actual=f61e 00:07:19.221 [2024-11-18 14:09:11.208890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1af753ed, 
Actual=1ab753ed 00:07:19.221 [2024-11-18 14:09:11.211349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38174660, Actual=38574660 00:07:19.221 [2024-11-18 14:09:11.213843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.216303] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.218766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000000005f 00:07:19.221 [2024-11-18 14:09:11.221242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000000005f 00:07:19.221 [2024-11-18 14:09:11.223714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=117e437c 00:07:19.221 [2024-11-18 14:09:11.225823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38574660, Actual=af50d94f 00:07:19.221 [2024-11-18 14:09:11.227949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.221 [2024-11-18 14:09:11.230414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:07:19.221 [2024-11-18 14:09:11.232888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.235358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.237808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000005f 00:07:19.221 [2024-11-18 14:09:11.240272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000005f 00:07:19.221 [2024-11-18 14:09:11.242754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.221 [2024-11-18 14:09:11.244874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4837a266, Actual=1c6ff717af82a191 00:07:19.221 passed 00:07:19.221 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-18 14:09:11.246072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:19.221 [2024-11-18 14:09:11.246378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:19.221 [2024-11-18 14:09:11.246677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.246990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 
[2024-11-18 14:09:11.247333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.221 [2024-11-18 14:09:11.247652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.221 [2024-11-18 14:09:11.247964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e4b0 00:07:19.221 [2024-11-18 14:09:11.248174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f61e 00:07:19.221 [2024-11-18 14:09:11.248379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:07:19.221 [2024-11-18 14:09:11.248676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:07:19.221 [2024-11-18 14:09:11.249009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.249314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.249618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.221 [2024-11-18 14:09:11.249915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.221 [2024-11-18 14:09:11.250208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=117e437c 00:07:19.221 [2024-11-18 14:09:11.250411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=af50d94f 00:07:19.221 [2024-11-18 14:09:11.250633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.221 [2024-11-18 14:09:11.250931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:07:19.221 [2024-11-18 14:09:11.251250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.251542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.251845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.221 [2024-11-18 14:09:11.252140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.221 [2024-11-18 14:09:11.252450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.221 [2024-11-18 14:09:11.252664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=1c6ff717af82a191 00:07:19.221 passed 00:07:19.221 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-18 14:09:11.252919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:19.221 [2024-11-18 14:09:11.253226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:19.221 [2024-11-18 14:09:11.253528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.253822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.254136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.221 [2024-11-18 14:09:11.254433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.221 [2024-11-18 14:09:11.254727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e4b0 00:07:19.221 [2024-11-18 14:09:11.254937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f61e 00:07:19.221 [2024-11-18 14:09:11.255142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:07:19.221 [2024-11-18 14:09:11.255454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:07:19.221 [2024-11-18 14:09:11.255747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.256049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.221 [2024-11-18 14:09:11.256352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.221 [2024-11-18 14:09:11.256654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.221 [2024-11-18 14:09:11.256963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=117e437c 00:07:19.221 [2024-11-18 14:09:11.257161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=af50d94f 00:07:19.221 [2024-11-18 14:09:11.257385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.221 [2024-11-18 14:09:11.257685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:07:19.222 [2024-11-18 14:09:11.257994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 
14:09:11.258305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.258617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.222 [2024-11-18 14:09:11.258931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.222 [2024-11-18 14:09:11.259263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.222 [2024-11-18 14:09:11.259478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1c6ff717af82a191 00:07:19.222 passed 00:07:19.222 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-18 14:09:11.259731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:19.222 [2024-11-18 14:09:11.260057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:19.222 [2024-11-18 14:09:11.260374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.260700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.261050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.222 [2024-11-18 14:09:11.261361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.222 [2024-11-18 14:09:11.261670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e4b0 00:07:19.222 [2024-11-18 14:09:11.261883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f61e 00:07:19.222 [2024-11-18 14:09:11.262114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:07:19.222 [2024-11-18 14:09:11.262419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:07:19.222 [2024-11-18 14:09:11.262739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.263041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.263357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.222 [2024-11-18 14:09:11.263688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.222 [2024-11-18 14:09:11.263994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=117e437c 00:07:19.222 [2024-11-18 14:09:11.264204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=af50d94f 00:07:19.222 [2024-11-18 14:09:11.264442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.222 [2024-11-18 14:09:11.264759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:07:19.222 [2024-11-18 14:09:11.265080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.265395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.265699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.222 [2024-11-18 14:09:11.266006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.222 [2024-11-18 14:09:11.266337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.222 [2024-11-18 14:09:11.266553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1c6ff717af82a191 00:07:19.222 passed 00:07:19.222 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-18 14:09:11.266812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:19.222 [2024-11-18 14:09:11.267122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:19.222 [2024-11-18 14:09:11.267435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.267747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.268071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.222 [2024-11-18 14:09:11.268374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.222 [2024-11-18 14:09:11.268676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e4b0 00:07:19.222 [2024-11-18 14:09:11.268903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f61e 00:07:19.222 passed 00:07:19.222 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-18 14:09:11.269171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:07:19.222 [2024-11-18 14:09:11.269472] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:07:19.222 [2024-11-18 14:09:11.269791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.270090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.270398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.222 [2024-11-18 14:09:11.270689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.222 [2024-11-18 14:09:11.271000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=117e437c 00:07:19.222 [2024-11-18 14:09:11.271225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=af50d94f 00:07:19.222 [2024-11-18 14:09:11.271489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.222 [2024-11-18 14:09:11.271794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:07:19.222 [2024-11-18 14:09:11.272100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.272408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.272711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.222 [2024-11-18 14:09:11.273043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.222 [2024-11-18 14:09:11.273370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.222 [2024-11-18 14:09:11.273583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1c6ff717af82a191 00:07:19.222 passed 00:07:19.222 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-18 14:09:11.273831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:07:19.222 [2024-11-18 14:09:11.274141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:07:19.222 [2024-11-18 14:09:11.274442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.274751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.222 [2024-11-18 14:09:11.275077] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.222 [2024-11-18 14:09:11.275393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:07:19.222 [2024-11-18 14:09:11.275721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e4b0 00:07:19.222 [2024-11-18 14:09:11.275925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f61e 00:07:19.222 passed 00:07:19.222 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-18 14:09:11.276191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:07:19.222 [2024-11-18 14:09:11.276496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:07:19.222 [2024-11-18 14:09:11.276817] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.223 [2024-11-18 14:09:11.277151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.223 [2024-11-18 14:09:11.277457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.223 [2024-11-18 14:09:11.277763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000000058 00:07:19.223 [2024-11-18 14:09:11.278069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=117e437c 00:07:19.223 [2024-11-18 14:09:11.278280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=af50d94f 00:07:19.223 [2024-11-18 14:09:11.278540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.223 [2024-11-18 14:09:11.278849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88410a2d4837a266, Actual=88010a2d4837a266 00:07:19.223 [2024-11-18 14:09:11.279168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.223 [2024-11-18 14:09:11.279488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:07:19.223 [2024-11-18 14:09:11.279799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.223 [2024-11-18 14:09:11.280101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000058 00:07:19.223 [2024-11-18 14:09:11.280425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.223 [2024-11-18 14:09:11.280644] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=1c6ff717af82a191 00:07:19.223 passed 00:07:19.223 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:19.223 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:19.223 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:19.484 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.484 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:19.484 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:19.484 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.484 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:19.484 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.484 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 14:09:11.324534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd0c, Actual=fd4c 00:07:19.484 [2024-11-18 14:09:11.325670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b4d6, Actual=b496 00:07:19.484 [2024-11-18 14:09:11.326801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.327926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.329062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40005f 00:07:19.484 [2024-11-18 14:09:11.330195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40005f 00:07:19.484 [2024-11-18 14:09:11.331323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=e4b0 00:07:19.484 [2024-11-18 14:09:11.332441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=d65 00:07:19.484 [2024-11-18 14:09:11.333575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1af753ed, Actual=1ab753ed 00:07:19.484 [2024-11-18 14:09:11.334706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=ece1398a, Actual=eca1398a 00:07:19.484 [2024-11-18 14:09:11.335845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.336998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.338127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000000005f 00:07:19.484 [2024-11-18 14:09:11.339261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000000005f 00:07:19.484 [2024-11-18 14:09:11.340388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=95, Expected=1ab753ed, Actual=117e437c 00:07:19.484 [2024-11-18 14:09:11.341518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=2f0cdb35 00:07:19.484 [2024-11-18 14:09:11.342647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.484 [2024-11-18 14:09:11.343803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=6856281af41d7320, Actual=6816281af41d7320 00:07:19.484 [2024-11-18 14:09:11.344954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.346096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.347231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000005f 00:07:19.484 [2024-11-18 14:09:11.348368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000005f 00:07:19.484 [2024-11-18 14:09:11.349492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.484 passed 00:07:19.484 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 14:09:11.350650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=c9398622b722b95d 00:07:19.484 [2024-11-18 14:09:11.351001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd0c, Actual=fd4c 00:07:19.484 [2024-11-18 14:09:11.351293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=234c, Actual=230c 00:07:19.484 [2024-11-18 14:09:11.351583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.351857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.352152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:07:19.484 [2024-11-18 14:09:11.352467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:07:19.484 [2024-11-18 14:09:11.352735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e4b0 00:07:19.484 [2024-11-18 14:09:11.353024] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=9aff 00:07:19.484 [2024-11-18 14:09:11.353300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1af753ed, Actual=1ab753ed 00:07:19.484 [2024-11-18 14:09:11.353580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cf572dfd, 
Actual=cf172dfd 00:07:19.484 [2024-11-18 14:09:11.353873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.354155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.484 [2024-11-18 14:09:11.354436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000000059 00:07:19.484 [2024-11-18 14:09:11.354721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000000059 00:07:19.484 [2024-11-18 14:09:11.355003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=117e437c 00:07:19.484 [2024-11-18 14:09:11.355290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=cbacf42 00:07:19.484 [2024-11-18 14:09:11.355578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.484 [2024-11-18 14:09:11.355864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7d29b369edcefce5, Actual=7d69b369edcefce5 00:07:19.485 [2024-11-18 14:09:11.356147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.356431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.356712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000059 00:07:19.485 [2024-11-18 14:09:11.356995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000059 00:07:19.485 [2024-11-18 14:09:11.357300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.485 [2024-11-18 14:09:11.357581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=dc461d51aef13698 00:07:19.485 passed 00:07:19.485 Test: dix_sec_512_md_0_error ...[2024-11-18 14:09:11.357660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:07:19.485 passed 00:07:19.485 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:07:19.485 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:19.485 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:19.485 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:19.485 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:19.485 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:19.485 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:19.485 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:19.485 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:19.485 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 14:09:11.401109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd0c, Actual=fd4c 00:07:19.485 [2024-11-18 14:09:11.402242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b4d6, Actual=b496 00:07:19.485 [2024-11-18 14:09:11.403932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.405541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.407531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40005f 00:07:19.485 [2024-11-18 14:09:11.408679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=40005f 00:07:19.485 [2024-11-18 14:09:11.409806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=e4b0 00:07:19.485 [2024-11-18 14:09:11.411750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=d65 00:07:19.485 [2024-11-18 14:09:11.413329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1af753ed, Actual=1ab753ed 00:07:19.485 [2024-11-18 14:09:11.414888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=ece1398a, Actual=eca1398a 00:07:19.485 [2024-11-18 14:09:11.416473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.418072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.419663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000000005f 00:07:19.485 [2024-11-18 14:09:11.421175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=4000000000005f 00:07:19.485 [2024-11-18 14:09:11.422308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=117e437c 00:07:19.485 [2024-11-18 14:09:11.423448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, 
Expected=b80b441a, Actual=2f0cdb35 00:07:19.485 [2024-11-18 14:09:11.424601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.485 [2024-11-18 14:09:11.425761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=6856281af41d7320, Actual=6816281af41d7320 00:07:19.485 [2024-11-18 14:09:11.426894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.428064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.429230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000005f 00:07:19.485 [2024-11-18 14:09:11.430364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=400000005f 00:07:19.485 [2024-11-18 14:09:11.431534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.485 passed 00:07:19.485 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 14:09:11.432664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=c9398622b722b95d 00:07:19.485 [2024-11-18 14:09:11.433050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd0c, Actual=fd4c 00:07:19.485 [2024-11-18 14:09:11.433351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=234c, Actual=230c 00:07:19.485 [2024-11-18 14:09:11.433645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.433947] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.434263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:07:19.485 [2024-11-18 14:09:11.434558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:07:19.485 [2024-11-18 14:09:11.434846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=e4b0 00:07:19.485 [2024-11-18 14:09:11.435135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=9aff 00:07:19.485 [2024-11-18 14:09:11.435453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1af753ed, Actual=1ab753ed 00:07:19.485 [2024-11-18 14:09:11.435761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cf572dfd, Actual=cf172dfd 00:07:19.485 [2024-11-18 14:09:11.436079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 
14:09:11.436368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.436658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000000059 00:07:19.485 [2024-11-18 14:09:11.436964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=40000000000059 00:07:19.485 [2024-11-18 14:09:11.437250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=117e437c 00:07:19.485 [2024-11-18 14:09:11.437553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=cbacf42 00:07:19.485 [2024-11-18 14:09:11.437859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a536a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:19.485 [2024-11-18 14:09:11.438156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=7d29b369edcefce5, Actual=7d69b369edcefce5 00:07:19.485 [2024-11-18 14:09:11.438455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.438750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:07:19.485 [2024-11-18 14:09:11.439016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000059 00:07:19.485 [2024-11-18 14:09:11.439317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=4000000059 00:07:19.485 [2024-11-18 14:09:11.439598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=3717f0dcc3f68ff9 00:07:19.485 [2024-11-18 14:09:11.439886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=dc461d51aef13698 00:07:19.485 passed 00:07:19.485 Test: set_md_interleave_iovs_test ...passed 00:07:19.485 Test: set_md_interleave_iovs_split_test ...passed 00:07:19.485 Test: dif_generate_stream_pi_16_test ...passed 00:07:19.485 Test: dif_generate_stream_test ...passed 00:07:19.486 Test: set_md_interleave_iovs_alignment_test ...[2024-11-18 14:09:11.447406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:07:19.486 passed 00:07:19.486 Test: dif_generate_split_test ...passed 00:07:19.486 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:19.486 Test: dif_verify_split_test ...passed 00:07:19.486 Test: dif_verify_stream_multi_segments_test ...passed 00:07:19.486 Test: update_crc32c_pi_16_test ...passed 00:07:19.486 Test: update_crc32c_test ...passed 00:07:19.486 Test: dif_update_crc32c_split_test ...passed 00:07:19.486 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:19.486 Test: get_range_with_md_test ...passed 00:07:19.486 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:19.486 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:19.486 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:19.486 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:19.486 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:19.486 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:19.486 Test: dif_generate_and_verify_unmap_test ...passed 00:07:19.486 00:07:19.486 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.486 suites 1 1 n/a 0 0 00:07:19.486 tests 79 79 79 0 0 00:07:19.486 asserts 3584 3584 3584 0 n/a 00:07:19.486 00:07:19.486 Elapsed time = 0.355 seconds 00:07:19.486 14:09:11 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:19.486 00:07:19.486 00:07:19.486 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.486 http://cunit.sourceforge.net/ 00:07:19.486 00:07:19.486 00:07:19.486 Suite: iov 00:07:19.486 Test: test_single_iov ...passed 00:07:19.486 Test: test_simple_iov ...passed 00:07:19.486 Test: test_complex_iov ...passed 00:07:19.486 Test: test_iovs_to_buf ...passed 00:07:19.486 Test: test_buf_to_iovs ...passed 00:07:19.486 Test: test_memset ...passed 00:07:19.486 Test: test_iov_one ...passed 00:07:19.486 Test: test_iov_xfer ...passed 00:07:19.486 00:07:19.486 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.486 suites 1 1 n/a 0 0 00:07:19.486 tests 8 8 8 0 0 00:07:19.486 asserts 156 156 156 0 n/a 00:07:19.486 00:07:19.486 Elapsed time = 0.000 seconds 00:07:19.486 14:09:11 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:19.745 00:07:19.745 00:07:19.745 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.745 http://cunit.sourceforge.net/ 00:07:19.745 00:07:19.745 00:07:19.745 Suite: math 00:07:19.745 Test: test_serial_number_arithmetic ...passed 00:07:19.745 Suite: erase 00:07:19.745 Test: test_memset_s ...passed 00:07:19.745 00:07:19.745 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.745 suites 2 2 n/a 0 0 00:07:19.745 tests 2 2 2 0 0 00:07:19.745 asserts 18 18 18 0 n/a 00:07:19.745 00:07:19.745 Elapsed time = 0.000 seconds 00:07:19.745 14:09:11 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:19.745 00:07:19.745 00:07:19.745 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.745 http://cunit.sourceforge.net/ 00:07:19.745 00:07:19.745 00:07:19.745 Suite: pipe 00:07:19.745 Test: test_create_destroy ...passed 00:07:19.745 Test: test_write_get_buffer ...passed 00:07:19.745 Test: test_write_advance ...passed 00:07:19.745 Test: test_read_get_buffer ...passed 00:07:19.745 Test: test_read_advance ...passed 00:07:19.745 Test: test_data ...passed 00:07:19.745 00:07:19.745 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:19.745 suites 1 1 n/a 0 0 00:07:19.745 tests 6 6 6 0 0 00:07:19.745 asserts 250 250 250 0 n/a 00:07:19.745 00:07:19.745 Elapsed time = 0.000 seconds 00:07:19.745 14:09:11 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:19.745 00:07:19.745 00:07:19.745 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.745 http://cunit.sourceforge.net/ 00:07:19.745 00:07:19.745 00:07:19.745 Suite: xor 00:07:19.745 Test: test_xor_gen ...passed 00:07:19.745 00:07:19.745 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.745 suites 1 1 n/a 0 0 00:07:19.745 tests 1 1 1 0 0 00:07:19.745 asserts 17 17 17 0 n/a 00:07:19.745 00:07:19.745 Elapsed time = 0.007 seconds 00:07:19.745 00:07:19.745 real 0m0.750s 00:07:19.745 user 0m0.602s 00:07:19.745 sys 0m0.149s 00:07:19.745 14:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.745 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:19.745 ************************************ 00:07:19.745 END TEST unittest_util 00:07:19.745 ************************************ 00:07:19.745 14:09:11 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:19.745 14:09:11 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:19.745 14:09:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.745 14:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.745 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:19.745 ************************************ 00:07:19.745 START TEST unittest_vhost 00:07:19.745 ************************************ 00:07:19.745 14:09:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:19.745 00:07:19.745 00:07:19.745 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.745 http://cunit.sourceforge.net/ 00:07:19.745 00:07:19.745 00:07:19.745 Suite: vhost_suite 00:07:19.745 Test: desc_to_iov_test ...[2024-11-18 14:09:11.717298] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:19.745 passed 00:07:19.745 Test: create_controller_test ...[2024-11-18 14:09:11.721518] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:19.745 [2024-11-18 14:09:11.721654] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:19.745 [2024-11-18 14:09:11.721792] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:19.745 [2024-11-18 14:09:11.721896] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:19.745 [2024-11-18 14:09:11.721950] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:19.746 [2024-11-18 14:09:11.722054] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxpassed 00:07:19.746 Test: session_find_by_vid_test ...[2024-11-18 14:09:11.723031] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:19.746 passed 00:07:19.746 Test: remove_controller_test ...[2024-11-18 14:09:11.725037] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:19.746 passed 00:07:19.746 Test: vq_avail_ring_get_test ...passed 00:07:19.746 Test: vq_packed_ring_test ...passed 00:07:19.746 Test: vhost_blk_construct_test ...passed 00:07:19.746 00:07:19.746 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.746 suites 1 1 n/a 0 0 00:07:19.746 tests 7 7 7 0 0 00:07:19.746 asserts 145 145 145 0 n/a 00:07:19.746 00:07:19.746 Elapsed time = 0.012 seconds 00:07:19.746 00:07:19.746 real 0m0.049s 00:07:19.746 user 0m0.028s 00:07:19.746 sys 0m0.021s 00:07:19.746 14:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.746 ************************************ 00:07:19.746 END TEST unittest_vhost 00:07:19.746 ************************************ 00:07:19.746 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:19.746 14:09:11 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:19.746 14:09:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.746 14:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.746 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:19.746 ************************************ 00:07:19.746 START TEST unittest_dma 00:07:19.746 ************************************ 00:07:19.746 14:09:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:19.746 00:07:19.746 00:07:19.746 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.746 http://cunit.sourceforge.net/ 00:07:19.746 00:07:19.746 00:07:19.746 Suite: dma_suite 00:07:19.746 Test: test_dma ...[2024-11-18 14:09:11.813217] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:19.746 passed 00:07:19.746 00:07:19.746 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.746 suites 1 1 n/a 0 0 00:07:19.746 tests 1 1 1 0 0 00:07:19.746 asserts 50 50 50 0 n/a 00:07:19.746 00:07:19.746 Elapsed time = 0.001 seconds 00:07:20.005 00:07:20.005 real 0m0.033s 00:07:20.005 user 0m0.012s 00:07:20.005 sys 0m0.022s 00:07:20.005 14:09:11 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.005 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.005 ************************************ 00:07:20.005 END TEST unittest_dma 00:07:20.005 ************************************ 00:07:20.005 14:09:11 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:07:20.005 14:09:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.005 14:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.005 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.005 ************************************ 00:07:20.005 START TEST unittest_init 00:07:20.005 ************************************ 00:07:20.005 14:09:11 -- common/autotest_common.sh@1114 -- # unittest_init 00:07:20.005 14:09:11 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:20.005 00:07:20.005 00:07:20.005 CUnit - A unit testing framework for C - Version 2.1-3 00:07:20.005 http://cunit.sourceforge.net/ 00:07:20.005 00:07:20.005 00:07:20.005 Suite: subsystem_suite 00:07:20.005 Test: subsystem_sort_test_depends_on_single ...passed 00:07:20.005 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:20.005 Test: subsystem_sort_test_missing_dependency ...[2024-11-18 14:09:11.901917] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:20.005 [2024-11-18 14:09:11.902197] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:20.005 passed 00:07:20.005 00:07:20.005 Run Summary: Type Total Ran Passed Failed Inactive 00:07:20.005 suites 1 1 n/a 0 0 00:07:20.005 tests 3 3 3 0 0 00:07:20.005 asserts 20 20 20 0 n/a 00:07:20.005 00:07:20.005 Elapsed time = 0.000 seconds 00:07:20.005 00:07:20.005 real 0m0.034s 00:07:20.005 user 0m0.018s 00:07:20.005 sys 0m0.016s 00:07:20.005 14:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.005 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.005 ************************************ 00:07:20.005 END TEST unittest_init 00:07:20.005 ************************************ 00:07:20.005 14:09:11 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:07:20.005 14:09:11 -- unit/unittest.sh@266 -- # hostname 00:07:20.005 14:09:11 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:20.264 geninfo: WARNING: invalid characters removed from testname! 
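For reference, each CUnit suite in this output comes from a standalone test binary that unittest.sh invokes by absolute path; a minimal sketch of re-running one suite by hand, assuming the repository layout shown in this log:

  cd /home/vagrant/spdk_repo/spdk
  # run the subsystem_suite binary exercised above; CUnit prints the same
  # per-test and Run Summary lines, and a failing assertion should yield a
  # non-zero exit status
  ./test/unit/lib/init/subsystem.c/subsystem_ut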
00:07:46.815 14:09:35 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:47.750 14:09:39 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:50.284 14:09:42 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:52.816 14:09:44 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:55.346 14:09:47 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:57.883 14:09:49 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:00.415 14:09:52 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:00.415 14:09:52 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:00.983 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:00.983 Found 309 entries. 00:08:00.983 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:00.983 Writing .css and .png files. 00:08:00.983 Generating output. 
00:08:00.983 Processing file include/linux/virtio_ring.h 00:08:01.242 Processing file include/spdk/nvme.h 00:08:01.242 Processing file include/spdk/histogram_data.h 00:08:01.242 Processing file include/spdk/bdev_module.h 00:08:01.242 Processing file include/spdk/endian.h 00:08:01.242 Processing file include/spdk/nvmf_transport.h 00:08:01.242 Processing file include/spdk/util.h 00:08:01.242 Processing file include/spdk/trace.h 00:08:01.242 Processing file include/spdk/mmio.h 00:08:01.242 Processing file include/spdk/thread.h 00:08:01.242 Processing file include/spdk/nvme_spec.h 00:08:01.242 Processing file include/spdk/base64.h 00:08:01.500 Processing file include/spdk_internal/virtio.h 00:08:01.500 Processing file include/spdk_internal/rdma.h 00:08:01.500 Processing file include/spdk_internal/sgl.h 00:08:01.500 Processing file include/spdk_internal/utf.h 00:08:01.500 Processing file include/spdk_internal/sock.h 00:08:01.500 Processing file include/spdk_internal/nvme_tcp.h 00:08:01.500 Processing file lib/accel/accel_rpc.c 00:08:01.500 Processing file lib/accel/accel.c 00:08:01.500 Processing file lib/accel/accel_sw.c 00:08:01.759 Processing file lib/bdev/bdev_zone.c 00:08:01.759 Processing file lib/bdev/bdev.c 00:08:01.759 Processing file lib/bdev/scsi_nvme.c 00:08:01.759 Processing file lib/bdev/bdev_rpc.c 00:08:01.759 Processing file lib/bdev/part.c 00:08:02.018 Processing file lib/blob/blob_bs_dev.c 00:08:02.018 Processing file lib/blob/request.c 00:08:02.018 Processing file lib/blob/zeroes.c 00:08:02.018 Processing file lib/blob/blobstore.c 00:08:02.018 Processing file lib/blob/blobstore.h 00:08:02.276 Processing file lib/blobfs/tree.c 00:08:02.276 Processing file lib/blobfs/blobfs.c 00:08:02.276 Processing file lib/conf/conf.c 00:08:02.276 Processing file lib/dma/dma.c 00:08:02.535 Processing file lib/env_dpdk/threads.c 00:08:02.535 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:02.535 Processing file lib/env_dpdk/env.c 00:08:02.535 Processing file lib/env_dpdk/memory.c 00:08:02.535 Processing file lib/env_dpdk/pci_ioat.c 00:08:02.535 Processing file lib/env_dpdk/pci_virtio.c 00:08:02.535 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:02.535 Processing file lib/env_dpdk/init.c 00:08:02.535 Processing file lib/env_dpdk/pci_dpdk.c 00:08:02.535 Processing file lib/env_dpdk/pci.c 00:08:02.535 Processing file lib/env_dpdk/pci_event.c 00:08:02.535 Processing file lib/env_dpdk/sigbus_handler.c 00:08:02.535 Processing file lib/env_dpdk/pci_vmd.c 00:08:02.535 Processing file lib/env_dpdk/pci_idxd.c 00:08:02.794 Processing file lib/event/scheduler_static.c 00:08:02.794 Processing file lib/event/log_rpc.c 00:08:02.794 Processing file lib/event/app_rpc.c 00:08:02.794 Processing file lib/event/reactor.c 00:08:02.794 Processing file lib/event/app.c 00:08:03.368 Processing file lib/ftl/ftl_l2p_flat.c 00:08:03.368 Processing file lib/ftl/ftl_debug.h 00:08:03.368 Processing file lib/ftl/ftl_nv_cache.h 00:08:03.368 Processing file lib/ftl/ftl_trace.c 00:08:03.368 Processing file lib/ftl/ftl_io.h 00:08:03.368 Processing file lib/ftl/ftl_writer.c 00:08:03.368 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:03.368 Processing file lib/ftl/ftl_core.c 00:08:03.368 Processing file lib/ftl/ftl_debug.c 00:08:03.368 Processing file lib/ftl/ftl_rq.c 00:08:03.368 Processing file lib/ftl/ftl_nv_cache.c 00:08:03.368 Processing file lib/ftl/ftl_sb.c 00:08:03.368 Processing file lib/ftl/ftl_p2l.c 00:08:03.368 Processing file lib/ftl/ftl_init.c 00:08:03.368 Processing file lib/ftl/ftl_reloc.c 00:08:03.368 
Processing file lib/ftl/ftl_l2p_cache.c 00:08:03.368 Processing file lib/ftl/ftl_band_ops.c 00:08:03.368 Processing file lib/ftl/ftl_core.h 00:08:03.368 Processing file lib/ftl/ftl_writer.h 00:08:03.368 Processing file lib/ftl/ftl_l2p.c 00:08:03.368 Processing file lib/ftl/ftl_io.c 00:08:03.368 Processing file lib/ftl/ftl_band.c 00:08:03.368 Processing file lib/ftl/ftl_band.h 00:08:03.368 Processing file lib/ftl/ftl_layout.c 00:08:03.368 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:03.368 Processing file lib/ftl/base/ftl_base_dev.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:03.650 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:03.909 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:03.909 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:03.909 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:03.909 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:03.909 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:03.909 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:04.167 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:04.167 Processing file lib/ftl/utils/ftl_property.h 00:08:04.167 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:04.167 Processing file lib/ftl/utils/ftl_df.h 00:08:04.167 Processing file lib/ftl/utils/ftl_mempool.c 00:08:04.167 Processing file lib/ftl/utils/ftl_conf.c 00:08:04.167 Processing file lib/ftl/utils/ftl_md.c 00:08:04.167 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:04.167 Processing file lib/ftl/utils/ftl_property.c 00:08:04.426 Processing file lib/idxd/idxd_user.c 00:08:04.426 Processing file lib/idxd/idxd.c 00:08:04.426 Processing file lib/idxd/idxd_internal.h 00:08:04.426 Processing file lib/init/subsystem_rpc.c 00:08:04.426 Processing file lib/init/rpc.c 00:08:04.426 Processing file lib/init/subsystem.c 00:08:04.426 Processing file lib/init/json_config.c 00:08:04.426 Processing file lib/ioat/ioat.c 00:08:04.426 Processing file lib/ioat/ioat_internal.h 00:08:04.994 Processing file lib/iscsi/iscsi.h 00:08:04.994 Processing file lib/iscsi/iscsi_subsystem.c 00:08:04.994 Processing file lib/iscsi/task.h 00:08:04.994 Processing file lib/iscsi/tgt_node.c 00:08:04.994 Processing file lib/iscsi/portal_grp.c 00:08:04.994 Processing file lib/iscsi/param.c 00:08:04.994 Processing file lib/iscsi/task.c 00:08:04.994 Processing file lib/iscsi/iscsi_rpc.c 00:08:04.994 Processing file lib/iscsi/init_grp.c 00:08:04.994 Processing file lib/iscsi/conn.c 00:08:04.994 Processing file lib/iscsi/iscsi.c 00:08:04.994 Processing file lib/iscsi/md5.c 00:08:04.994 Processing file lib/json/json_parse.c 00:08:04.994 Processing file lib/json/json_util.c 00:08:04.994 Processing file lib/json/json_write.c 00:08:05.252 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:05.252 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:05.252 Processing file lib/jsonrpc/jsonrpc_client.c 
00:08:05.252 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:05.252 Processing file lib/log/log.c 00:08:05.252 Processing file lib/log/log_flags.c 00:08:05.252 Processing file lib/log/log_deprecated.c 00:08:05.252 Processing file lib/lvol/lvol.c 00:08:05.511 Processing file lib/nbd/nbd.c 00:08:05.511 Processing file lib/nbd/nbd_rpc.c 00:08:05.511 Processing file lib/notify/notify.c 00:08:05.511 Processing file lib/notify/notify_rpc.c 00:08:06.447 Processing file lib/nvme/nvme_ns.c 00:08:06.447 Processing file lib/nvme/nvme_internal.h 00:08:06.447 Processing file lib/nvme/nvme_cuse.c 00:08:06.447 Processing file lib/nvme/nvme_qpair.c 00:08:06.447 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:06.447 Processing file lib/nvme/nvme_vfio_user.c 00:08:06.447 Processing file lib/nvme/nvme.c 00:08:06.447 Processing file lib/nvme/nvme_ctrlr.c 00:08:06.447 Processing file lib/nvme/nvme_tcp.c 00:08:06.447 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:06.447 Processing file lib/nvme/nvme_fabric.c 00:08:06.447 Processing file lib/nvme/nvme_rdma.c 00:08:06.447 Processing file lib/nvme/nvme_pcie_internal.h 00:08:06.447 Processing file lib/nvme/nvme_poll_group.c 00:08:06.447 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:06.447 Processing file lib/nvme/nvme_io_msg.c 00:08:06.447 Processing file lib/nvme/nvme_pcie.c 00:08:06.447 Processing file lib/nvme/nvme_ns_cmd.c 00:08:06.447 Processing file lib/nvme/nvme_transport.c 00:08:06.447 Processing file lib/nvme/nvme_pcie_common.c 00:08:06.447 Processing file lib/nvme/nvme_zns.c 00:08:06.447 Processing file lib/nvme/nvme_discovery.c 00:08:06.447 Processing file lib/nvme/nvme_opal.c 00:08:06.447 Processing file lib/nvme/nvme_quirks.c 00:08:07.014 Processing file lib/nvmf/ctrlr_discovery.c 00:08:07.014 Processing file lib/nvmf/subsystem.c 00:08:07.014 Processing file lib/nvmf/rdma.c 00:08:07.014 Processing file lib/nvmf/tcp.c 00:08:07.014 Processing file lib/nvmf/ctrlr.c 00:08:07.014 Processing file lib/nvmf/nvmf.c 00:08:07.014 Processing file lib/nvmf/ctrlr_bdev.c 00:08:07.014 Processing file lib/nvmf/nvmf_internal.h 00:08:07.014 Processing file lib/nvmf/nvmf_rpc.c 00:08:07.014 Processing file lib/nvmf/transport.c 00:08:07.014 Processing file lib/rdma/rdma_verbs.c 00:08:07.014 Processing file lib/rdma/common.c 00:08:07.014 Processing file lib/rpc/rpc.c 00:08:07.273 Processing file lib/scsi/scsi_rpc.c 00:08:07.273 Processing file lib/scsi/port.c 00:08:07.273 Processing file lib/scsi/dev.c 00:08:07.273 Processing file lib/scsi/scsi.c 00:08:07.273 Processing file lib/scsi/lun.c 00:08:07.273 Processing file lib/scsi/scsi_pr.c 00:08:07.273 Processing file lib/scsi/task.c 00:08:07.273 Processing file lib/scsi/scsi_bdev.c 00:08:07.273 Processing file lib/sock/sock.c 00:08:07.273 Processing file lib/sock/sock_rpc.c 00:08:07.531 Processing file lib/thread/iobuf.c 00:08:07.531 Processing file lib/thread/thread.c 00:08:07.531 Processing file lib/trace/trace.c 00:08:07.531 Processing file lib/trace/trace_rpc.c 00:08:07.531 Processing file lib/trace/trace_flags.c 00:08:07.790 Processing file lib/trace_parser/trace.cpp 00:08:07.790 Processing file lib/ut/ut.c 00:08:07.790 Processing file lib/ut_mock/mock.c 00:08:08.358 Processing file lib/util/crc32.c 00:08:08.358 Processing file lib/util/xor.c 00:08:08.358 Processing file lib/util/crc32c.c 00:08:08.358 Processing file lib/util/iov.c 00:08:08.358 Processing file lib/util/uuid.c 00:08:08.358 Processing file lib/util/string.c 00:08:08.358 Processing file lib/util/crc32_ieee.c 00:08:08.358 Processing file 
lib/util/base64.c 00:08:08.358 Processing file lib/util/bit_array.c 00:08:08.358 Processing file lib/util/file.c 00:08:08.358 Processing file lib/util/strerror_tls.c 00:08:08.358 Processing file lib/util/fd.c 00:08:08.358 Processing file lib/util/cpuset.c 00:08:08.358 Processing file lib/util/zipf.c 00:08:08.358 Processing file lib/util/hexlify.c 00:08:08.358 Processing file lib/util/fd_group.c 00:08:08.358 Processing file lib/util/math.c 00:08:08.358 Processing file lib/util/crc64.c 00:08:08.358 Processing file lib/util/dif.c 00:08:08.358 Processing file lib/util/pipe.c 00:08:08.358 Processing file lib/util/crc16.c 00:08:08.358 Processing file lib/vfio_user/host/vfio_user.c 00:08:08.358 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:08.617 Processing file lib/vhost/rte_vhost_user.c 00:08:08.617 Processing file lib/vhost/vhost_internal.h 00:08:08.617 Processing file lib/vhost/vhost_rpc.c 00:08:08.617 Processing file lib/vhost/vhost_scsi.c 00:08:08.617 Processing file lib/vhost/vhost.c 00:08:08.617 Processing file lib/vhost/vhost_blk.c 00:08:08.617 Processing file lib/virtio/virtio_vhost_user.c 00:08:08.617 Processing file lib/virtio/virtio_vfio_user.c 00:08:08.617 Processing file lib/virtio/virtio.c 00:08:08.617 Processing file lib/virtio/virtio_pci.c 00:08:08.876 Processing file lib/vmd/led.c 00:08:08.876 Processing file lib/vmd/vmd.c 00:08:08.876 Processing file module/accel/dsa/accel_dsa.c 00:08:08.876 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:08.876 Processing file module/accel/error/accel_error.c 00:08:08.876 Processing file module/accel/error/accel_error_rpc.c 00:08:09.135 Processing file module/accel/iaa/accel_iaa.c 00:08:09.135 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:09.135 Processing file module/accel/ioat/accel_ioat.c 00:08:09.135 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:09.135 Processing file module/bdev/aio/bdev_aio.c 00:08:09.135 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:09.394 Processing file module/bdev/delay/vbdev_delay.c 00:08:09.394 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:09.394 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:09.394 Processing file module/bdev/error/vbdev_error.c 00:08:09.394 Processing file module/bdev/ftl/bdev_ftl.c 00:08:09.394 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:09.652 Processing file module/bdev/gpt/gpt.c 00:08:09.652 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:09.652 Processing file module/bdev/gpt/gpt.h 00:08:09.652 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:09.652 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:09.652 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:09.652 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:09.910 Processing file module/bdev/malloc/bdev_malloc.c 00:08:09.910 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:09.910 Processing file module/bdev/null/bdev_null.c 00:08:09.910 Processing file module/bdev/null/bdev_null_rpc.c 00:08:10.169 Processing file module/bdev/nvme/bdev_nvme.c 00:08:10.169 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:10.169 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:10.169 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:10.169 Processing file module/bdev/nvme/vbdev_opal.c 00:08:10.169 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:10.169 Processing file module/bdev/nvme/nvme_rpc.c 00:08:10.427 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:10.427 Processing file 
module/bdev/passthru/vbdev_passthru_rpc.c 00:08:10.686 Processing file module/bdev/raid/raid5f.c 00:08:10.686 Processing file module/bdev/raid/bdev_raid.c 00:08:10.686 Processing file module/bdev/raid/raid1.c 00:08:10.686 Processing file module/bdev/raid/concat.c 00:08:10.686 Processing file module/bdev/raid/raid0.c 00:08:10.686 Processing file module/bdev/raid/bdev_raid.h 00:08:10.686 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:10.686 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:10.686 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:10.686 Processing file module/bdev/split/vbdev_split.c 00:08:10.686 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:10.686 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:10.686 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:10.944 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:10.944 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:10.944 Processing file module/blob/bdev/blob_bdev.c 00:08:10.944 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:10.944 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:11.203 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:11.203 Processing file module/event/subsystems/accel/accel.c 00:08:11.203 Processing file module/event/subsystems/bdev/bdev.c 00:08:11.462 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:11.462 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:11.462 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:11.462 Processing file module/event/subsystems/nbd/nbd.c 00:08:11.721 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:11.721 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:11.721 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:11.721 Processing file module/event/subsystems/scsi/scsi.c 00:08:11.721 Processing file module/event/subsystems/sock/sock.c 00:08:11.979 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:11.979 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:11.979 Processing file module/event/subsystems/vmd/vmd.c 00:08:11.979 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:12.238 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:12.238 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:12.238 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:12.497 Processing file module/sock/sock_kernel.h 00:08:12.497 Processing file module/sock/posix/posix.c 00:08:12.497 Writing directory view page. 00:08:12.497 Overall coverage rate: 00:08:12.497 lines......: 39.1% (39266 of 100435 lines) 00:08:12.497 functions..: 42.8% (3587 of 8384 functions) 00:08:12.497 00:08:12.497 00:08:12.497 ===================== 00:08:12.497 All unit tests passed 00:08:12.497 ===================== 00:08:12.497 WARN: lcov not installed or SPDK built without coverage! 
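The coverage commands logged above condense to a short lcov/genhtml pipeline; a simplified sketch using the same file names as this run (the --rc lcov_branch_coverage/lcov_function_coverage options and the repeated -r filters for app/, dpdk/ and examples/ are elided for brevity):

  # capture counters produced by the unit-test run
  lcov -q -d . -c --no-external -t "$(hostname)" -o ut_cov_test.info
  # merge the pre-run baseline with the test capture
  lcov -q -a ut_cov_base.info -a ut_cov_test.info -o ut_cov_total.info
  lcov -q -a ut_cov_total.info -o ut_cov_unit.info
  # drop test sources, mirroring one of the -r filters above
  lcov -q -r ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o ut_cov_unit.info
  # render the HTML report summarized as "Overall coverage rate" above
  genhtml ut_cov_unit.info --output-directory ut_coverage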
00:08:12.497 14:10:04 -- unit/unittest.sh@277 -- # set +x 00:08:12.497 00:08:12.497 00:08:12.497 00:08:12.497 real 2m46.756s 00:08:12.497 user 2m21.873s 00:08:12.497 sys 0m14.282s 00:08:12.497 14:10:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.497 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 ************************************ 00:08:12.497 END TEST unittest 00:08:12.497 ************************************ 00:08:12.497 14:10:04 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:08:12.497 14:10:04 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:12.497 14:10:04 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:08:12.497 14:10:04 -- spdk/autotest.sh@160 -- # timing_enter lib 00:08:12.497 14:10:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.497 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 14:10:04 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:12.497 14:10:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.497 14:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.497 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:08:12.497 ************************************ 00:08:12.497 START TEST env 00:08:12.497 ************************************ 00:08:12.497 14:10:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:12.755 * Looking for test storage... 00:08:12.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:12.755 14:10:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.755 14:10:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.755 14:10:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:12.755 14:10:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:12.755 14:10:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:12.755 14:10:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:12.755 14:10:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:12.755 14:10:04 -- scripts/common.sh@335 -- # IFS=.-: 00:08:12.755 14:10:04 -- scripts/common.sh@335 -- # read -ra ver1 00:08:12.755 14:10:04 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.755 14:10:04 -- scripts/common.sh@336 -- # read -ra ver2 00:08:12.755 14:10:04 -- scripts/common.sh@337 -- # local 'op=<' 00:08:12.755 14:10:04 -- scripts/common.sh@339 -- # ver1_l=2 00:08:12.755 14:10:04 -- scripts/common.sh@340 -- # ver2_l=1 00:08:12.755 14:10:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:12.755 14:10:04 -- scripts/common.sh@343 -- # case "$op" in 00:08:12.755 14:10:04 -- scripts/common.sh@344 -- # : 1 00:08:12.755 14:10:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:12.755 14:10:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.755 14:10:04 -- scripts/common.sh@364 -- # decimal 1 00:08:12.755 14:10:04 -- scripts/common.sh@352 -- # local d=1 00:08:12.755 14:10:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.755 14:10:04 -- scripts/common.sh@354 -- # echo 1 00:08:12.755 14:10:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:12.755 14:10:04 -- scripts/common.sh@365 -- # decimal 2 00:08:12.755 14:10:04 -- scripts/common.sh@352 -- # local d=2 00:08:12.755 14:10:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.755 14:10:04 -- scripts/common.sh@354 -- # echo 2 00:08:12.755 14:10:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:12.755 14:10:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:12.755 14:10:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:12.755 14:10:04 -- scripts/common.sh@367 -- # return 0 00:08:12.755 14:10:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.755 14:10:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.755 --rc genhtml_branch_coverage=1 00:08:12.755 --rc genhtml_function_coverage=1 00:08:12.755 --rc genhtml_legend=1 00:08:12.755 --rc geninfo_all_blocks=1 00:08:12.755 --rc geninfo_unexecuted_blocks=1 00:08:12.756 00:08:12.756 ' 00:08:12.756 14:10:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.756 --rc genhtml_branch_coverage=1 00:08:12.756 --rc genhtml_function_coverage=1 00:08:12.756 --rc genhtml_legend=1 00:08:12.756 --rc geninfo_all_blocks=1 00:08:12.756 --rc geninfo_unexecuted_blocks=1 00:08:12.756 00:08:12.756 ' 00:08:12.756 14:10:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.756 --rc genhtml_branch_coverage=1 00:08:12.756 --rc genhtml_function_coverage=1 00:08:12.756 --rc genhtml_legend=1 00:08:12.756 --rc geninfo_all_blocks=1 00:08:12.756 --rc geninfo_unexecuted_blocks=1 00:08:12.756 00:08:12.756 ' 00:08:12.756 14:10:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:12.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.756 --rc genhtml_branch_coverage=1 00:08:12.756 --rc genhtml_function_coverage=1 00:08:12.756 --rc genhtml_legend=1 00:08:12.756 --rc geninfo_all_blocks=1 00:08:12.756 --rc geninfo_unexecuted_blocks=1 00:08:12.756 00:08:12.756 ' 00:08:12.756 14:10:04 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:12.756 14:10:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.756 14:10:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.756 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:08:12.756 ************************************ 00:08:12.756 START TEST env_memory 00:08:12.756 ************************************ 00:08:12.756 14:10:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:12.756 00:08:12.756 00:08:12.756 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.756 http://cunit.sourceforge.net/ 00:08:12.756 00:08:12.756 00:08:12.756 Suite: memory 00:08:12.756 Test: alloc and free memory map ...[2024-11-18 14:10:04.787384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:12.756 passed 00:08:13.014 Test: mem 
map translation ...[2024-11-18 14:10:04.836974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:13.014 [2024-11-18 14:10:04.837262] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:13.014 [2024-11-18 14:10:04.837507] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:13.014 [2024-11-18 14:10:04.837712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:13.014 passed 00:08:13.014 Test: mem map registration ...[2024-11-18 14:10:04.924357] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:13.014 [2024-11-18 14:10:04.924610] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:13.014 passed 00:08:13.014 Test: mem map adjacent registrations ...passed 00:08:13.014 00:08:13.014 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.014 suites 1 1 n/a 0 0 00:08:13.014 tests 4 4 4 0 0 00:08:13.014 asserts 152 152 152 0 n/a 00:08:13.014 00:08:13.014 Elapsed time = 0.287 seconds 00:08:13.014 00:08:13.014 real 0m0.320s 00:08:13.014 user 0m0.298s 00:08:13.014 sys 0m0.021s 00:08:13.014 14:10:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.014 14:10:05 -- common/autotest_common.sh@10 -- # set +x 00:08:13.014 ************************************ 00:08:13.014 END TEST env_memory 00:08:13.014 ************************************ 00:08:13.274 14:10:05 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:13.274 14:10:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.274 14:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.274 14:10:05 -- common/autotest_common.sh@10 -- # set +x 00:08:13.274 ************************************ 00:08:13.274 START TEST env_vtophys 00:08:13.274 ************************************ 00:08:13.274 14:10:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:13.274 EAL: lib.eal log level changed from notice to debug 00:08:13.274 EAL: Detected lcore 0 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 1 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 2 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 3 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 4 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 5 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 6 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 7 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 8 as core 0 on socket 0 00:08:13.274 EAL: Detected lcore 9 as core 0 on socket 0 00:08:13.274 EAL: Maximum logical cores by configuration: 128 00:08:13.274 EAL: Detected CPU lcores: 10 00:08:13.274 EAL: Detected NUMA nodes: 1 00:08:13.274 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:08:13.274 EAL: Checking presence of .so 'librte_eal.so.23' 00:08:13.274 EAL: Checking presence of .so 'librte_eal.so' 00:08:13.274 EAL: Detected static linkage of DPDK 00:08:13.274 EAL: No shared files mode enabled, IPC will be 
disabled 00:08:13.274 EAL: Selected IOVA mode 'PA' 00:08:13.274 EAL: Probing VFIO support... 00:08:13.274 EAL: IOMMU type 1 (Type 1) is supported 00:08:13.274 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:13.274 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:13.274 EAL: VFIO support initialized 00:08:13.274 EAL: Ask a virtual area of 0x2e000 bytes 00:08:13.274 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:13.274 EAL: Setting up physically contiguous memory... 00:08:13.274 EAL: Setting maximum number of open files to 1048576 00:08:13.274 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:13.274 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:13.274 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.275 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:13.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.275 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:13.275 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:13.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.275 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:13.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.275 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:13.275 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:13.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.275 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:13.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.275 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:13.275 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:13.275 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.275 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:13.275 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.275 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.275 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:13.275 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:13.275 EAL: Hugepages will be freed exactly as allocated. 00:08:13.275 EAL: No shared files mode enabled, IPC is disabled 00:08:13.275 EAL: No shared files mode enabled, IPC is disabled 00:08:13.275 EAL: TSC frequency is ~2200000 KHz 00:08:13.275 EAL: Main lcore 0 is ready (tid=7fb005216a80;cpuset=[0]) 00:08:13.275 EAL: Trying to obtain current memory policy. 00:08:13.275 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.275 EAL: Restoring previous memory policy: 0 00:08:13.275 EAL: request: mp_malloc_sync 00:08:13.275 EAL: No shared files mode enabled, IPC is disabled 00:08:13.275 EAL: Heap on socket 0 was expanded by 2MB 00:08:13.275 EAL: No shared files mode enabled, IPC is disabled 00:08:13.275 EAL: Mem event callback 'spdk:(nil)' registered 00:08:13.275 00:08:13.275 00:08:13.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.275 http://cunit.sourceforge.net/ 00:08:13.275 00:08:13.275 00:08:13.275 Suite: components_suite 00:08:13.841 Test: vtophys_malloc_test ...passed 00:08:13.841 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:08:13.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.841 EAL: Restoring previous memory policy: 0 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was expanded by 4MB 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was shrunk by 4MB 00:08:13.841 EAL: Trying to obtain current memory policy. 00:08:13.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.841 EAL: Restoring previous memory policy: 0 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was expanded by 6MB 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was shrunk by 6MB 00:08:13.841 EAL: Trying to obtain current memory policy. 00:08:13.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.841 EAL: Restoring previous memory policy: 0 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was expanded by 10MB 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was shrunk by 10MB 00:08:13.841 EAL: Trying to obtain current memory policy. 00:08:13.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.841 EAL: Restoring previous memory policy: 0 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was expanded by 18MB 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was shrunk by 18MB 00:08:13.841 EAL: Trying to obtain current memory policy. 00:08:13.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.841 EAL: Restoring previous memory policy: 0 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was expanded by 34MB 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was shrunk by 34MB 00:08:13.841 EAL: Trying to obtain current memory policy. 
00:08:13.841 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.841 EAL: Restoring previous memory policy: 0 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.841 EAL: request: mp_malloc_sync 00:08:13.841 EAL: No shared files mode enabled, IPC is disabled 00:08:13.841 EAL: Heap on socket 0 was expanded by 66MB 00:08:13.841 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.100 EAL: request: mp_malloc_sync 00:08:14.100 EAL: No shared files mode enabled, IPC is disabled 00:08:14.100 EAL: Heap on socket 0 was shrunk by 66MB 00:08:14.100 EAL: Trying to obtain current memory policy. 00:08:14.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:14.100 EAL: Restoring previous memory policy: 0 00:08:14.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.100 EAL: request: mp_malloc_sync 00:08:14.100 EAL: No shared files mode enabled, IPC is disabled 00:08:14.100 EAL: Heap on socket 0 was expanded by 130MB 00:08:14.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.100 EAL: request: mp_malloc_sync 00:08:14.100 EAL: No shared files mode enabled, IPC is disabled 00:08:14.100 EAL: Heap on socket 0 was shrunk by 130MB 00:08:14.100 EAL: Trying to obtain current memory policy. 00:08:14.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:14.100 EAL: Restoring previous memory policy: 0 00:08:14.100 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.100 EAL: request: mp_malloc_sync 00:08:14.100 EAL: No shared files mode enabled, IPC is disabled 00:08:14.100 EAL: Heap on socket 0 was expanded by 258MB 00:08:14.358 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.358 EAL: request: mp_malloc_sync 00:08:14.358 EAL: No shared files mode enabled, IPC is disabled 00:08:14.358 EAL: Heap on socket 0 was shrunk by 258MB 00:08:14.358 EAL: Trying to obtain current memory policy. 00:08:14.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:14.616 EAL: Restoring previous memory policy: 0 00:08:14.616 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.616 EAL: request: mp_malloc_sync 00:08:14.616 EAL: No shared files mode enabled, IPC is disabled 00:08:14.617 EAL: Heap on socket 0 was expanded by 514MB 00:08:14.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.876 EAL: request: mp_malloc_sync 00:08:14.876 EAL: No shared files mode enabled, IPC is disabled 00:08:14.876 EAL: Heap on socket 0 was shrunk by 514MB 00:08:14.876 EAL: Trying to obtain current memory policy. 
00:08:14.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.442 EAL: Restoring previous memory policy: 0 00:08:15.442 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.442 EAL: request: mp_malloc_sync 00:08:15.442 EAL: No shared files mode enabled, IPC is disabled 00:08:15.442 EAL: Heap on socket 0 was expanded by 1026MB 00:08:15.700 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.959 EAL: request: mp_malloc_sync 00:08:15.959 EAL: No shared files mode enabled, IPC is disabled 00:08:15.959 passedEAL: Heap on socket 0 was shrunk by 1026MB 00:08:15.959 00:08:15.959 00:08:15.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.959 suites 1 1 n/a 0 0 00:08:15.959 tests 2 2 2 0 0 00:08:15.959 asserts 6317 6317 6317 0 n/a 00:08:15.959 00:08:15.959 Elapsed time = 2.530 seconds 00:08:15.959 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.959 EAL: request: mp_malloc_sync 00:08:15.959 EAL: No shared files mode enabled, IPC is disabled 00:08:15.959 EAL: Heap on socket 0 was shrunk by 2MB 00:08:15.959 EAL: No shared files mode enabled, IPC is disabled 00:08:15.959 EAL: No shared files mode enabled, IPC is disabled 00:08:15.959 EAL: No shared files mode enabled, IPC is disabled 00:08:15.959 ************************************ 00:08:15.959 END TEST env_vtophys 00:08:15.959 ************************************ 00:08:15.959 00:08:15.959 real 0m2.816s 00:08:15.959 user 0m1.499s 00:08:15.959 sys 0m1.154s 00:08:15.959 14:10:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.959 14:10:07 -- common/autotest_common.sh@10 -- # set +x 00:08:15.959 14:10:07 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:15.959 14:10:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:15.959 14:10:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.959 14:10:07 -- common/autotest_common.sh@10 -- # set +x 00:08:15.959 ************************************ 00:08:15.959 START TEST env_pci 00:08:15.959 ************************************ 00:08:15.959 14:10:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:15.959 00:08:15.959 00:08:15.959 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.959 http://cunit.sourceforge.net/ 00:08:15.959 00:08:15.959 00:08:15.959 Suite: pci 00:08:15.959 Test: pci_hook ...[2024-11-18 14:10:08.002911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 114416 has claimed it 00:08:15.959 EAL: Cannot find device (10000:00:01.0) 00:08:15.959 EAL: Failed to attach device on primary process 00:08:15.959 passed 00:08:15.959 00:08:15.959 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.959 suites 1 1 n/a 0 0 00:08:15.959 tests 1 1 1 0 0 00:08:15.959 asserts 25 25 25 0 n/a 00:08:15.960 00:08:16.218 Elapsed time = 0.006 seconds 00:08:16.218 00:08:16.218 real 0m0.067s 00:08:16.218 user 0m0.034s 00:08:16.218 sys 0m0.031s 00:08:16.218 14:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.218 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.218 ************************************ 00:08:16.218 END TEST env_pci 00:08:16.218 ************************************ 00:08:16.218 14:10:08 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:16.218 14:10:08 -- env/env.sh@15 -- # uname 00:08:16.218 14:10:08 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:16.218 14:10:08 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:16.218 14:10:08 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:16.218 14:10:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:16.218 14:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.218 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.218 ************************************ 00:08:16.218 START TEST env_dpdk_post_init 00:08:16.218 ************************************ 00:08:16.218 14:10:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:16.218 EAL: Detected CPU lcores: 10 00:08:16.218 EAL: Detected NUMA nodes: 1 00:08:16.218 EAL: Detected static linkage of DPDK 00:08:16.218 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:16.218 EAL: Selected IOVA mode 'PA' 00:08:16.218 EAL: VFIO support initialized 00:08:16.218 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:16.477 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:16.477 Starting DPDK initialization... 00:08:16.477 Starting SPDK post initialization... 00:08:16.477 SPDK NVMe probe 00:08:16.477 Attaching to 0000:00:06.0 00:08:16.477 Attached to 0000:00:06.0 00:08:16.477 Cleaning up... 00:08:16.477 00:08:16.477 real 0m0.241s 00:08:16.477 user 0m0.083s 00:08:16.477 sys 0m0.059s 00:08:16.477 14:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.477 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.477 ************************************ 00:08:16.477 END TEST env_dpdk_post_init 00:08:16.477 ************************************ 00:08:16.477 14:10:08 -- env/env.sh@26 -- # uname 00:08:16.477 14:10:08 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:16.477 14:10:08 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:16.477 14:10:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.477 14:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.477 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.477 ************************************ 00:08:16.477 START TEST env_mem_callbacks 00:08:16.477 ************************************ 00:08:16.477 14:10:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:16.477 EAL: Detected CPU lcores: 10 00:08:16.477 EAL: Detected NUMA nodes: 1 00:08:16.477 EAL: Detected static linkage of DPDK 00:08:16.477 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:16.477 EAL: Selected IOVA mode 'PA' 00:08:16.477 EAL: VFIO support initialized 00:08:16.736 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:16.736 00:08:16.736 00:08:16.736 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.736 http://cunit.sourceforge.net/ 00:08:16.736 00:08:16.736 00:08:16.736 Suite: memory 00:08:16.736 Test: test ... 
00:08:16.736 register 0x200000200000 2097152 00:08:16.736 malloc 3145728 00:08:16.736 register 0x200000400000 4194304 00:08:16.736 buf 0x200000500000 len 3145728 PASSED 00:08:16.736 malloc 64 00:08:16.736 buf 0x2000004fff40 len 64 PASSED 00:08:16.736 malloc 4194304 00:08:16.736 register 0x200000800000 6291456 00:08:16.736 buf 0x200000a00000 len 4194304 PASSED 00:08:16.736 free 0x200000500000 3145728 00:08:16.736 free 0x2000004fff40 64 00:08:16.736 unregister 0x200000400000 4194304 PASSED 00:08:16.736 free 0x200000a00000 4194304 00:08:16.736 unregister 0x200000800000 6291456 PASSED 00:08:16.736 malloc 8388608 00:08:16.736 register 0x200000400000 10485760 00:08:16.736 buf 0x200000600000 len 8388608 PASSED 00:08:16.736 free 0x200000600000 8388608 00:08:16.736 unregister 0x200000400000 10485760 PASSED 00:08:16.736 passed 00:08:16.736 00:08:16.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.736 suites 1 1 n/a 0 0 00:08:16.736 tests 1 1 1 0 0 00:08:16.736 asserts 15 15 15 0 n/a 00:08:16.736 00:08:16.736 Elapsed time = 0.008 seconds 00:08:16.736 ************************************ 00:08:16.736 END TEST env_mem_callbacks 00:08:16.736 ************************************ 00:08:16.736 00:08:16.736 real 0m0.195s 00:08:16.736 user 0m0.032s 00:08:16.736 sys 0m0.063s 00:08:16.736 14:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.736 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.736 00:08:16.736 real 0m4.109s 00:08:16.736 user 0m2.215s 00:08:16.736 sys 0m1.516s 00:08:16.736 14:10:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.736 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.736 ************************************ 00:08:16.736 END TEST env 00:08:16.736 ************************************ 00:08:16.736 14:10:08 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:16.736 14:10:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.736 14:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.736 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:08:16.736 ************************************ 00:08:16.736 START TEST rpc 00:08:16.736 ************************************ 00:08:16.736 14:10:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:16.736 * Looking for test storage... 
00:08:16.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:16.736 14:10:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:16.736 14:10:08 -- common/autotest_common.sh@1690 -- # lcov --version
00:08:16.736 14:10:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:16.995 14:10:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:16.995 14:10:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:16.995 14:10:08 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:16.995 14:10:08 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:16.995 14:10:08 -- scripts/common.sh@335 -- # IFS=.-:
00:08:16.995 14:10:08 -- scripts/common.sh@335 -- # read -ra ver1
00:08:16.995 14:10:08 -- scripts/common.sh@336 -- # IFS=.-:
00:08:16.995 14:10:08 -- scripts/common.sh@336 -- # read -ra ver2
00:08:16.995 14:10:08 -- scripts/common.sh@337 -- # local 'op=<'
00:08:16.995 14:10:08 -- scripts/common.sh@339 -- # ver1_l=2
00:08:16.995 14:10:08 -- scripts/common.sh@340 -- # ver2_l=1
00:08:16.995 14:10:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:16.995 14:10:08 -- scripts/common.sh@343 -- # case "$op" in
00:08:16.995 14:10:08 -- scripts/common.sh@344 -- # : 1
00:08:16.995 14:10:08 -- scripts/common.sh@363 -- # (( v = 0 ))
00:08:16.995 14:10:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:16.995 14:10:08 -- scripts/common.sh@364 -- # decimal 1
00:08:16.995 14:10:08 -- scripts/common.sh@352 -- # local d=1
00:08:16.995 14:10:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:16.995 14:10:08 -- scripts/common.sh@354 -- # echo 1
00:08:16.995 14:10:08 -- scripts/common.sh@364 -- # ver1[v]=1
00:08:16.995 14:10:08 -- scripts/common.sh@365 -- # decimal 2
00:08:16.995 14:10:08 -- scripts/common.sh@352 -- # local d=2
00:08:16.995 14:10:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:16.995 14:10:08 -- scripts/common.sh@354 -- # echo 2
00:08:16.995 14:10:08 -- scripts/common.sh@365 -- # ver2[v]=2
00:08:16.995 14:10:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:16.995 14:10:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:16.995 14:10:08 -- scripts/common.sh@367 -- # return 0
00:08:16.995 14:10:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:16.995 14:10:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.995 --rc genhtml_branch_coverage=1
00:08:16.995 --rc genhtml_function_coverage=1
00:08:16.995 --rc genhtml_legend=1
00:08:16.995 --rc geninfo_all_blocks=1
00:08:16.995 --rc geninfo_unexecuted_blocks=1
00:08:16.995
00:08:16.995 '
00:08:16.995 14:10:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.995 --rc genhtml_branch_coverage=1
00:08:16.995 --rc genhtml_function_coverage=1
00:08:16.995 --rc genhtml_legend=1
00:08:16.995 --rc geninfo_all_blocks=1
00:08:16.995 --rc geninfo_unexecuted_blocks=1
00:08:16.995
00:08:16.995 '
00:08:16.995 14:10:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.995 --rc genhtml_branch_coverage=1
00:08:16.995 --rc genhtml_function_coverage=1
00:08:16.995 --rc genhtml_legend=1
00:08:16.995 --rc geninfo_all_blocks=1
00:08:16.995 --rc geninfo_unexecuted_blocks=1
00:08:16.995
00:08:16.995 '
00:08:16.995 14:10:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:08:16.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.995 --rc genhtml_branch_coverage=1
00:08:16.995 --rc genhtml_function_coverage=1
00:08:16.995 --rc genhtml_legend=1
00:08:16.995 --rc geninfo_all_blocks=1
00:08:16.995 --rc geninfo_unexecuted_blocks=1
00:08:16.995
00:08:16.995 '
00:08:16.995 14:10:08 -- rpc/rpc.sh@65 -- # spdk_pid=114554
00:08:16.995 14:10:08 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:08:16.995 14:10:08 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:16.995 14:10:08 -- rpc/rpc.sh@67 -- # waitforlisten 114554
00:08:16.995 14:10:08 -- common/autotest_common.sh@829 -- # '[' -z 114554 ']'
00:08:16.995 14:10:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:16.995 14:10:08 -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:16.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:16.995 14:10:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:16.995 14:10:08 -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:16.995 14:10:08 -- common/autotest_common.sh@10 -- # set +x
00:08:16.995 [2024-11-18 14:10:08.959956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:08:16.995 [2024-11-18 14:10:08.960229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114554 ]
00:08:17.254 [2024-11-18 14:10:09.110691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.254 [2024-11-18 14:10:09.174004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:17.254 [2024-11-18 14:10:09.174311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:17.254 [2024-11-18 14:10:09.174387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 114554' to capture a snapshot of events at runtime.
00:08:17.254 [2024-11-18 14:10:09.174443] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid114554 for offline analysis/debug.
00:08:17.254 [2024-11-18 14:10:09.174542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.191 14:10:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.191 14:10:09 -- common/autotest_common.sh@862 -- # return 0 00:08:18.191 14:10:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:18.191 14:10:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:18.191 14:10:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:18.191 14:10:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:18.191 14:10:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.191 14:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.191 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:18.191 ************************************ 00:08:18.191 START TEST rpc_integrity 00:08:18.191 ************************************ 00:08:18.191 14:10:09 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:08:18.191 14:10:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.191 14:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.191 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:18.191 14:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.191 14:10:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.191 14:10:09 -- rpc/rpc.sh@13 -- # jq length 00:08:18.191 14:10:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.191 14:10:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.191 14:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.191 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:09 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:18.192 14:10:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.192 14:10:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.192 { 00:08:18.192 "name": "Malloc0", 00:08:18.192 "aliases": [ 00:08:18.192 "815fb046-f652-4ea1-9019-e4b63a1daded" 00:08:18.192 ], 00:08:18.192 "product_name": "Malloc disk", 00:08:18.192 "block_size": 512, 00:08:18.192 "num_blocks": 16384, 00:08:18.192 "uuid": "815fb046-f652-4ea1-9019-e4b63a1daded", 00:08:18.192 "assigned_rate_limits": { 00:08:18.192 "rw_ios_per_sec": 0, 00:08:18.192 "rw_mbytes_per_sec": 0, 00:08:18.192 "r_mbytes_per_sec": 0, 00:08:18.192 "w_mbytes_per_sec": 0 00:08:18.192 }, 00:08:18.192 "claimed": false, 00:08:18.192 "zoned": false, 00:08:18.192 "supported_io_types": { 00:08:18.192 "read": true, 00:08:18.192 "write": true, 00:08:18.192 "unmap": true, 00:08:18.192 "write_zeroes": true, 00:08:18.192 "flush": true, 00:08:18.192 "reset": true, 00:08:18.192 "compare": false, 00:08:18.192 "compare_and_write": false, 00:08:18.192 "abort": true, 00:08:18.192 "nvme_admin": false, 00:08:18.192 "nvme_io": false 00:08:18.192 }, 00:08:18.192 "memory_domains": [ 00:08:18.192 { 00:08:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.192 
"dma_device_type": 2 00:08:18.192 } 00:08:18.192 ], 00:08:18.192 "driver_specific": {} 00:08:18.192 } 00:08:18.192 ]' 00:08:18.192 14:10:09 -- rpc/rpc.sh@17 -- # jq length 00:08:18.192 14:10:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:18.192 14:10:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 [2024-11-18 14:10:10.034910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:18.192 [2024-11-18 14:10:10.035045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.192 [2024-11-18 14:10:10.035102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:08:18.192 [2024-11-18 14:10:10.035137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.192 [2024-11-18 14:10:10.037911] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.192 [2024-11-18 14:10:10.038026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:18.192 Passthru0 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:18.192 { 00:08:18.192 "name": "Malloc0", 00:08:18.192 "aliases": [ 00:08:18.192 "815fb046-f652-4ea1-9019-e4b63a1daded" 00:08:18.192 ], 00:08:18.192 "product_name": "Malloc disk", 00:08:18.192 "block_size": 512, 00:08:18.192 "num_blocks": 16384, 00:08:18.192 "uuid": "815fb046-f652-4ea1-9019-e4b63a1daded", 00:08:18.192 "assigned_rate_limits": { 00:08:18.192 "rw_ios_per_sec": 0, 00:08:18.192 "rw_mbytes_per_sec": 0, 00:08:18.192 "r_mbytes_per_sec": 0, 00:08:18.192 "w_mbytes_per_sec": 0 00:08:18.192 }, 00:08:18.192 "claimed": true, 00:08:18.192 "claim_type": "exclusive_write", 00:08:18.192 "zoned": false, 00:08:18.192 "supported_io_types": { 00:08:18.192 "read": true, 00:08:18.192 "write": true, 00:08:18.192 "unmap": true, 00:08:18.192 "write_zeroes": true, 00:08:18.192 "flush": true, 00:08:18.192 "reset": true, 00:08:18.192 "compare": false, 00:08:18.192 "compare_and_write": false, 00:08:18.192 "abort": true, 00:08:18.192 "nvme_admin": false, 00:08:18.192 "nvme_io": false 00:08:18.192 }, 00:08:18.192 "memory_domains": [ 00:08:18.192 { 00:08:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.192 "dma_device_type": 2 00:08:18.192 } 00:08:18.192 ], 00:08:18.192 "driver_specific": {} 00:08:18.192 }, 00:08:18.192 { 00:08:18.192 "name": "Passthru0", 00:08:18.192 "aliases": [ 00:08:18.192 "c58cd4c2-9af5-589a-8dc9-b830a4d7047c" 00:08:18.192 ], 00:08:18.192 "product_name": "passthru", 00:08:18.192 "block_size": 512, 00:08:18.192 "num_blocks": 16384, 00:08:18.192 "uuid": "c58cd4c2-9af5-589a-8dc9-b830a4d7047c", 00:08:18.192 "assigned_rate_limits": { 00:08:18.192 "rw_ios_per_sec": 0, 00:08:18.192 "rw_mbytes_per_sec": 0, 00:08:18.192 "r_mbytes_per_sec": 0, 00:08:18.192 "w_mbytes_per_sec": 0 00:08:18.192 }, 00:08:18.192 "claimed": false, 00:08:18.192 "zoned": false, 00:08:18.192 "supported_io_types": { 00:08:18.192 "read": true, 00:08:18.192 "write": true, 00:08:18.192 "unmap": true, 00:08:18.192 
"write_zeroes": true, 00:08:18.192 "flush": true, 00:08:18.192 "reset": true, 00:08:18.192 "compare": false, 00:08:18.192 "compare_and_write": false, 00:08:18.192 "abort": true, 00:08:18.192 "nvme_admin": false, 00:08:18.192 "nvme_io": false 00:08:18.192 }, 00:08:18.192 "memory_domains": [ 00:08:18.192 { 00:08:18.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.192 "dma_device_type": 2 00:08:18.192 } 00:08:18.192 ], 00:08:18.192 "driver_specific": { 00:08:18.192 "passthru": { 00:08:18.192 "name": "Passthru0", 00:08:18.192 "base_bdev_name": "Malloc0" 00:08:18.192 } 00:08:18.192 } 00:08:18.192 } 00:08:18.192 ]' 00:08:18.192 14:10:10 -- rpc/rpc.sh@21 -- # jq length 00:08:18.192 14:10:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:18.192 14:10:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:18.192 14:10:10 -- rpc/rpc.sh@26 -- # jq length 00:08:18.192 14:10:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:18.192 00:08:18.192 real 0m0.259s 00:08:18.192 user 0m0.180s 00:08:18.192 sys 0m0.015s 00:08:18.192 ************************************ 00:08:18.192 END TEST rpc_integrity 00:08:18.192 ************************************ 00:08:18.192 14:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:18.192 14:10:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.192 14:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 ************************************ 00:08:18.192 START TEST rpc_plugins 00:08:18.192 ************************************ 00:08:18.192 14:10:10 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:08:18.192 14:10:10 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:18.192 14:10:10 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:18.192 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.192 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.192 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.192 14:10:10 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:18.192 { 00:08:18.192 "name": "Malloc1", 00:08:18.192 "aliases": [ 00:08:18.192 "b609c8a0-6653-4ad3-a68e-b00bcebb1eb0" 00:08:18.192 ], 00:08:18.192 "product_name": "Malloc disk", 00:08:18.192 
"block_size": 4096, 00:08:18.192 "num_blocks": 256, 00:08:18.192 "uuid": "b609c8a0-6653-4ad3-a68e-b00bcebb1eb0", 00:08:18.192 "assigned_rate_limits": { 00:08:18.192 "rw_ios_per_sec": 0, 00:08:18.192 "rw_mbytes_per_sec": 0, 00:08:18.192 "r_mbytes_per_sec": 0, 00:08:18.193 "w_mbytes_per_sec": 0 00:08:18.193 }, 00:08:18.193 "claimed": false, 00:08:18.193 "zoned": false, 00:08:18.193 "supported_io_types": { 00:08:18.193 "read": true, 00:08:18.193 "write": true, 00:08:18.193 "unmap": true, 00:08:18.193 "write_zeroes": true, 00:08:18.193 "flush": true, 00:08:18.193 "reset": true, 00:08:18.193 "compare": false, 00:08:18.193 "compare_and_write": false, 00:08:18.193 "abort": true, 00:08:18.193 "nvme_admin": false, 00:08:18.193 "nvme_io": false 00:08:18.193 }, 00:08:18.193 "memory_domains": [ 00:08:18.193 { 00:08:18.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.193 "dma_device_type": 2 00:08:18.193 } 00:08:18.193 ], 00:08:18.193 "driver_specific": {} 00:08:18.193 } 00:08:18.193 ]' 00:08:18.193 14:10:10 -- rpc/rpc.sh@32 -- # jq length 00:08:18.452 14:10:10 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:18.452 14:10:10 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:18.452 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.452 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.452 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.452 14:10:10 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:18.452 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.452 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.452 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.452 14:10:10 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:18.452 14:10:10 -- rpc/rpc.sh@36 -- # jq length 00:08:18.452 14:10:10 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:18.452 00:08:18.452 real 0m0.127s 00:08:18.452 user 0m0.087s 00:08:18.452 sys 0m0.008s 00:08:18.452 ************************************ 00:08:18.452 END TEST rpc_plugins 00:08:18.452 ************************************ 00:08:18.452 14:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.452 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.452 14:10:10 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:18.452 14:10:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.452 14:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.452 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.452 ************************************ 00:08:18.452 START TEST rpc_trace_cmd_test 00:08:18.452 ************************************ 00:08:18.452 14:10:10 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:08:18.452 14:10:10 -- rpc/rpc.sh@40 -- # local info 00:08:18.452 14:10:10 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:18.452 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.452 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.452 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.452 14:10:10 -- rpc/rpc.sh@42 -- # info='{ 00:08:18.452 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid114554", 00:08:18.452 "tpoint_group_mask": "0x8", 00:08:18.452 "iscsi_conn": { 00:08:18.452 "mask": "0x2", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "scsi": { 00:08:18.452 "mask": "0x4", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "bdev": { 00:08:18.452 "mask": "0x8", 00:08:18.452 "tpoint_mask": 
"0xffffffffffffffff" 00:08:18.452 }, 00:08:18.452 "nvmf_rdma": { 00:08:18.452 "mask": "0x10", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "nvmf_tcp": { 00:08:18.452 "mask": "0x20", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "ftl": { 00:08:18.452 "mask": "0x40", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "blobfs": { 00:08:18.452 "mask": "0x80", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "dsa": { 00:08:18.452 "mask": "0x200", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "thread": { 00:08:18.452 "mask": "0x400", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "nvme_pcie": { 00:08:18.452 "mask": "0x800", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "iaa": { 00:08:18.452 "mask": "0x1000", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "nvme_tcp": { 00:08:18.452 "mask": "0x2000", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 }, 00:08:18.452 "bdev_nvme": { 00:08:18.452 "mask": "0x4000", 00:08:18.452 "tpoint_mask": "0x0" 00:08:18.452 } 00:08:18.452 }' 00:08:18.452 14:10:10 -- rpc/rpc.sh@43 -- # jq length 00:08:18.452 14:10:10 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:18.452 14:10:10 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:18.452 14:10:10 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:18.452 14:10:10 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:18.452 14:10:10 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:18.452 14:10:10 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:18.711 14:10:10 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:18.711 14:10:10 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:18.711 14:10:10 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:18.711 00:08:18.711 real 0m0.229s 00:08:18.711 user 0m0.205s 00:08:18.711 sys 0m0.018s 00:08:18.711 ************************************ 00:08:18.711 14:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.711 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.711 END TEST rpc_trace_cmd_test 00:08:18.711 ************************************ 00:08:18.711 14:10:10 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:18.711 14:10:10 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:18.711 14:10:10 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:18.711 14:10:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.711 14:10:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.711 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.711 ************************************ 00:08:18.711 START TEST rpc_daemon_integrity 00:08:18.711 ************************************ 00:08:18.711 14:10:10 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:08:18.711 14:10:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:18.711 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.711 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.711 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.711 14:10:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:18.711 14:10:10 -- rpc/rpc.sh@13 -- # jq length 00:08:18.711 14:10:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:18.711 14:10:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:18.711 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.711 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.711 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.711 14:10:10 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:18.711 14:10:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:18.711 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.711 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.711 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.711 14:10:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:18.711 { 00:08:18.711 "name": "Malloc2", 00:08:18.711 "aliases": [ 00:08:18.711 "a32b8880-0b57-49a2-9919-10c025ac665e" 00:08:18.711 ], 00:08:18.711 "product_name": "Malloc disk", 00:08:18.711 "block_size": 512, 00:08:18.711 "num_blocks": 16384, 00:08:18.711 "uuid": "a32b8880-0b57-49a2-9919-10c025ac665e", 00:08:18.711 "assigned_rate_limits": { 00:08:18.711 "rw_ios_per_sec": 0, 00:08:18.711 "rw_mbytes_per_sec": 0, 00:08:18.711 "r_mbytes_per_sec": 0, 00:08:18.711 "w_mbytes_per_sec": 0 00:08:18.711 }, 00:08:18.711 "claimed": false, 00:08:18.711 "zoned": false, 00:08:18.711 "supported_io_types": { 00:08:18.711 "read": true, 00:08:18.711 "write": true, 00:08:18.711 "unmap": true, 00:08:18.711 "write_zeroes": true, 00:08:18.711 "flush": true, 00:08:18.711 "reset": true, 00:08:18.711 "compare": false, 00:08:18.711 "compare_and_write": false, 00:08:18.711 "abort": true, 00:08:18.711 "nvme_admin": false, 00:08:18.711 "nvme_io": false 00:08:18.711 }, 00:08:18.711 "memory_domains": [ 00:08:18.711 { 00:08:18.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.711 "dma_device_type": 2 00:08:18.711 } 00:08:18.711 ], 00:08:18.711 "driver_specific": {} 00:08:18.711 } 00:08:18.711 ]' 00:08:18.711 14:10:10 -- rpc/rpc.sh@17 -- # jq length 00:08:18.971 14:10:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:18.971 14:10:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:18.971 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.971 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.971 [2024-11-18 14:10:10.792811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:18.971 [2024-11-18 14:10:10.792884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.971 [2024-11-18 14:10:10.792926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.971 [2024-11-18 14:10:10.792950] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.971 [2024-11-18 14:10:10.795574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.971 [2024-11-18 14:10:10.795644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:18.971 Passthru0 00:08:18.971 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.971 14:10:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:18.971 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.971 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.971 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.971 14:10:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:18.971 { 00:08:18.971 "name": "Malloc2", 00:08:18.971 "aliases": [ 00:08:18.971 "a32b8880-0b57-49a2-9919-10c025ac665e" 00:08:18.971 ], 00:08:18.971 "product_name": "Malloc disk", 00:08:18.971 "block_size": 512, 00:08:18.971 "num_blocks": 16384, 00:08:18.971 "uuid": "a32b8880-0b57-49a2-9919-10c025ac665e", 00:08:18.971 "assigned_rate_limits": { 00:08:18.971 "rw_ios_per_sec": 0, 00:08:18.971 "rw_mbytes_per_sec": 0, 00:08:18.971 "r_mbytes_per_sec": 0, 00:08:18.971 
"w_mbytes_per_sec": 0 00:08:18.971 }, 00:08:18.971 "claimed": true, 00:08:18.971 "claim_type": "exclusive_write", 00:08:18.971 "zoned": false, 00:08:18.971 "supported_io_types": { 00:08:18.971 "read": true, 00:08:18.971 "write": true, 00:08:18.971 "unmap": true, 00:08:18.971 "write_zeroes": true, 00:08:18.971 "flush": true, 00:08:18.971 "reset": true, 00:08:18.971 "compare": false, 00:08:18.971 "compare_and_write": false, 00:08:18.971 "abort": true, 00:08:18.971 "nvme_admin": false, 00:08:18.971 "nvme_io": false 00:08:18.971 }, 00:08:18.971 "memory_domains": [ 00:08:18.971 { 00:08:18.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.971 "dma_device_type": 2 00:08:18.971 } 00:08:18.971 ], 00:08:18.971 "driver_specific": {} 00:08:18.971 }, 00:08:18.971 { 00:08:18.971 "name": "Passthru0", 00:08:18.971 "aliases": [ 00:08:18.971 "62e5da8e-17ab-54e6-8c8c-5385d849471a" 00:08:18.971 ], 00:08:18.971 "product_name": "passthru", 00:08:18.971 "block_size": 512, 00:08:18.971 "num_blocks": 16384, 00:08:18.971 "uuid": "62e5da8e-17ab-54e6-8c8c-5385d849471a", 00:08:18.971 "assigned_rate_limits": { 00:08:18.971 "rw_ios_per_sec": 0, 00:08:18.971 "rw_mbytes_per_sec": 0, 00:08:18.971 "r_mbytes_per_sec": 0, 00:08:18.971 "w_mbytes_per_sec": 0 00:08:18.971 }, 00:08:18.971 "claimed": false, 00:08:18.971 "zoned": false, 00:08:18.971 "supported_io_types": { 00:08:18.971 "read": true, 00:08:18.971 "write": true, 00:08:18.971 "unmap": true, 00:08:18.971 "write_zeroes": true, 00:08:18.971 "flush": true, 00:08:18.971 "reset": true, 00:08:18.971 "compare": false, 00:08:18.971 "compare_and_write": false, 00:08:18.971 "abort": true, 00:08:18.971 "nvme_admin": false, 00:08:18.971 "nvme_io": false 00:08:18.971 }, 00:08:18.971 "memory_domains": [ 00:08:18.971 { 00:08:18.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.971 "dma_device_type": 2 00:08:18.971 } 00:08:18.971 ], 00:08:18.971 "driver_specific": { 00:08:18.971 "passthru": { 00:08:18.971 "name": "Passthru0", 00:08:18.971 "base_bdev_name": "Malloc2" 00:08:18.971 } 00:08:18.971 } 00:08:18.971 } 00:08:18.971 ]' 00:08:18.971 14:10:10 -- rpc/rpc.sh@21 -- # jq length 00:08:18.971 14:10:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:18.971 14:10:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:18.971 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.971 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.971 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.971 14:10:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:18.971 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.971 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.971 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.971 14:10:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:18.971 14:10:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.971 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.971 14:10:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.971 14:10:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:18.971 14:10:10 -- rpc/rpc.sh@26 -- # jq length 00:08:18.971 14:10:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:18.971 00:08:18.971 real 0m0.274s 00:08:18.971 user 0m0.192s 00:08:18.971 sys 0m0.025s 00:08:18.971 ************************************ 00:08:18.971 END TEST rpc_daemon_integrity 00:08:18.971 ************************************ 00:08:18.971 14:10:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.971 
14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:08:18.971 14:10:10 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:18.971 14:10:10 -- rpc/rpc.sh@84 -- # killprocess 114554 00:08:18.971 14:10:10 -- common/autotest_common.sh@936 -- # '[' -z 114554 ']' 00:08:18.971 14:10:10 -- common/autotest_common.sh@940 -- # kill -0 114554 00:08:18.971 14:10:10 -- common/autotest_common.sh@941 -- # uname 00:08:18.971 14:10:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:18.971 14:10:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 114554 00:08:18.971 14:10:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:18.971 killing process with pid 114554 00:08:18.971 14:10:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:18.971 14:10:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 114554' 00:08:18.971 14:10:11 -- common/autotest_common.sh@955 -- # kill 114554 00:08:18.971 14:10:11 -- common/autotest_common.sh@960 -- # wait 114554 00:08:19.555 00:08:19.555 real 0m2.755s 00:08:19.555 user 0m3.414s 00:08:19.555 sys 0m0.648s 00:08:19.555 14:10:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.555 ************************************ 00:08:19.555 END TEST rpc 00:08:19.555 ************************************ 00:08:19.555 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:19.555 14:10:11 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:19.555 14:10:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.555 14:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.555 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:19.556 ************************************ 00:08:19.556 START TEST rpc_client 00:08:19.556 ************************************ 00:08:19.556 14:10:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:19.556 * Looking for test storage... 00:08:19.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:19.556 14:10:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:19.556 14:10:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:19.556 14:10:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:19.850 14:10:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:19.850 14:10:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:19.850 14:10:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:19.850 14:10:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:19.850 14:10:11 -- scripts/common.sh@335 -- # IFS=.-: 00:08:19.850 14:10:11 -- scripts/common.sh@335 -- # read -ra ver1 00:08:19.850 14:10:11 -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.850 14:10:11 -- scripts/common.sh@336 -- # read -ra ver2 00:08:19.850 14:10:11 -- scripts/common.sh@337 -- # local 'op=<' 00:08:19.850 14:10:11 -- scripts/common.sh@339 -- # ver1_l=2 00:08:19.850 14:10:11 -- scripts/common.sh@340 -- # ver2_l=1 00:08:19.850 14:10:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:19.850 14:10:11 -- scripts/common.sh@343 -- # case "$op" in 00:08:19.850 14:10:11 -- scripts/common.sh@344 -- # : 1 00:08:19.850 14:10:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:19.850 14:10:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.850 14:10:11 -- scripts/common.sh@364 -- # decimal 1 00:08:19.850 14:10:11 -- scripts/common.sh@352 -- # local d=1 00:08:19.850 14:10:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.850 14:10:11 -- scripts/common.sh@354 -- # echo 1 00:08:19.850 14:10:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:19.850 14:10:11 -- scripts/common.sh@365 -- # decimal 2 00:08:19.850 14:10:11 -- scripts/common.sh@352 -- # local d=2 00:08:19.850 14:10:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.850 14:10:11 -- scripts/common.sh@354 -- # echo 2 00:08:19.850 14:10:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:19.850 14:10:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:19.850 14:10:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:19.850 14:10:11 -- scripts/common.sh@367 -- # return 0 00:08:19.850 14:10:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.850 14:10:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:19.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.850 --rc genhtml_branch_coverage=1 00:08:19.850 --rc genhtml_function_coverage=1 00:08:19.850 --rc genhtml_legend=1 00:08:19.850 --rc geninfo_all_blocks=1 00:08:19.850 --rc geninfo_unexecuted_blocks=1 00:08:19.850 00:08:19.850 ' 00:08:19.850 14:10:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:19.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.850 --rc genhtml_branch_coverage=1 00:08:19.850 --rc genhtml_function_coverage=1 00:08:19.850 --rc genhtml_legend=1 00:08:19.850 --rc geninfo_all_blocks=1 00:08:19.850 --rc geninfo_unexecuted_blocks=1 00:08:19.850 00:08:19.850 ' 00:08:19.850 14:10:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:19.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.850 --rc genhtml_branch_coverage=1 00:08:19.850 --rc genhtml_function_coverage=1 00:08:19.850 --rc genhtml_legend=1 00:08:19.850 --rc geninfo_all_blocks=1 00:08:19.850 --rc geninfo_unexecuted_blocks=1 00:08:19.850 00:08:19.850 ' 00:08:19.850 14:10:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:19.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.850 --rc genhtml_branch_coverage=1 00:08:19.850 --rc genhtml_function_coverage=1 00:08:19.850 --rc genhtml_legend=1 00:08:19.850 --rc geninfo_all_blocks=1 00:08:19.850 --rc geninfo_unexecuted_blocks=1 00:08:19.850 00:08:19.850 ' 00:08:19.850 14:10:11 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:19.850 OK 00:08:19.850 14:10:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:19.850 00:08:19.850 real 0m0.216s 00:08:19.850 user 0m0.162s 00:08:19.850 sys 0m0.073s 00:08:19.850 14:10:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.850 ************************************ 00:08:19.850 END TEST rpc_client 00:08:19.850 ************************************ 00:08:19.850 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:19.850 14:10:11 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:19.850 14:10:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.850 14:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.850 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:19.850 ************************************ 00:08:19.850 START TEST 
json_config 00:08:19.850 ************************************ 00:08:19.851 14:10:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:19.851 14:10:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:19.851 14:10:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:19.851 14:10:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:19.851 14:10:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:19.851 14:10:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:19.851 14:10:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:19.851 14:10:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:19.851 14:10:11 -- scripts/common.sh@335 -- # IFS=.-: 00:08:19.851 14:10:11 -- scripts/common.sh@335 -- # read -ra ver1 00:08:19.851 14:10:11 -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.851 14:10:11 -- scripts/common.sh@336 -- # read -ra ver2 00:08:19.851 14:10:11 -- scripts/common.sh@337 -- # local 'op=<' 00:08:19.851 14:10:11 -- scripts/common.sh@339 -- # ver1_l=2 00:08:19.851 14:10:11 -- scripts/common.sh@340 -- # ver2_l=1 00:08:19.851 14:10:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:19.851 14:10:11 -- scripts/common.sh@343 -- # case "$op" in 00:08:19.851 14:10:11 -- scripts/common.sh@344 -- # : 1 00:08:19.851 14:10:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:19.851 14:10:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:19.851 14:10:11 -- scripts/common.sh@364 -- # decimal 1 00:08:19.851 14:10:11 -- scripts/common.sh@352 -- # local d=1 00:08:19.851 14:10:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.851 14:10:11 -- scripts/common.sh@354 -- # echo 1 00:08:19.851 14:10:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:19.851 14:10:11 -- scripts/common.sh@365 -- # decimal 2 00:08:20.117 14:10:11 -- scripts/common.sh@352 -- # local d=2 00:08:20.117 14:10:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.117 14:10:11 -- scripts/common.sh@354 -- # echo 2 00:08:20.117 14:10:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.117 14:10:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.117 14:10:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.117 14:10:11 -- scripts/common.sh@367 -- # return 0 00:08:20.117 14:10:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.117 14:10:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.117 --rc genhtml_branch_coverage=1 00:08:20.117 --rc genhtml_function_coverage=1 00:08:20.117 --rc genhtml_legend=1 00:08:20.117 --rc geninfo_all_blocks=1 00:08:20.117 --rc geninfo_unexecuted_blocks=1 00:08:20.117 00:08:20.117 ' 00:08:20.117 14:10:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.117 --rc genhtml_branch_coverage=1 00:08:20.117 --rc genhtml_function_coverage=1 00:08:20.117 --rc genhtml_legend=1 00:08:20.117 --rc geninfo_all_blocks=1 00:08:20.117 --rc geninfo_unexecuted_blocks=1 00:08:20.117 00:08:20.117 ' 00:08:20.117 14:10:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.117 --rc genhtml_branch_coverage=1 00:08:20.117 --rc genhtml_function_coverage=1 00:08:20.117 --rc genhtml_legend=1 00:08:20.117 --rc 
geninfo_all_blocks=1 00:08:20.117 --rc geninfo_unexecuted_blocks=1 00:08:20.117 00:08:20.117 ' 00:08:20.117 14:10:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.117 --rc genhtml_branch_coverage=1 00:08:20.117 --rc genhtml_function_coverage=1 00:08:20.117 --rc genhtml_legend=1 00:08:20.117 --rc geninfo_all_blocks=1 00:08:20.117 --rc geninfo_unexecuted_blocks=1 00:08:20.117 00:08:20.117 ' 00:08:20.117 14:10:11 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:20.117 14:10:11 -- nvmf/common.sh@7 -- # uname -s 00:08:20.117 14:10:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.117 14:10:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.117 14:10:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.117 14:10:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.117 14:10:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.117 14:10:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.117 14:10:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.117 14:10:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.117 14:10:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.117 14:10:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.117 14:10:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae123072-5ab5-418f-9071-43a470b76a53 00:08:20.117 14:10:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae123072-5ab5-418f-9071-43a470b76a53 00:08:20.117 14:10:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.117 14:10:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.117 14:10:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:20.117 14:10:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.117 14:10:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.117 14:10:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.117 14:10:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.117 14:10:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.117 14:10:11 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.117 14:10:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.117 14:10:11 -- paths/export.sh@5 -- # export PATH 
00:08:20.117 14:10:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:20.117 14:10:11 -- nvmf/common.sh@46 -- # : 0 00:08:20.117 14:10:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:20.117 14:10:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:20.117 14:10:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:20.117 14:10:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.117 14:10:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.117 14:10:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:20.117 14:10:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:20.117 14:10:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:20.118 14:10:11 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:20.118 14:10:11 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:08:20.118 14:10:11 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:20.118 14:10:11 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:20.118 14:10:11 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:20.118 14:10:11 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:20.118 14:10:11 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:20.118 14:10:11 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:08:20.118 14:10:11 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:20.118 14:10:11 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:20.118 14:10:11 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:20.118 INFO: JSON configuration test init 00:08:20.118 14:10:11 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:08:20.118 14:10:11 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:20.118 14:10:11 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:20.118 14:10:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.118 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:20.118 14:10:11 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:20.118 14:10:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.118 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:20.118 14:10:11 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:20.118 14:10:11 -- json_config/json_config.sh@98 -- # local app=target 00:08:20.118 14:10:11 -- json_config/json_config.sh@99 -- # shift 
00:08:20.118 14:10:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:20.118 14:10:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:20.118 14:10:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=114840 00:08:20.118 14:10:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:20.118 Waiting for target to run... 00:08:20.118 14:10:11 -- json_config/json_config.sh@114 -- # waitforlisten 114840 /var/tmp/spdk_tgt.sock 00:08:20.118 14:10:11 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:20.118 14:10:11 -- common/autotest_common.sh@829 -- # '[' -z 114840 ']' 00:08:20.118 14:10:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:20.118 14:10:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:20.118 14:10:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:20.118 14:10:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.118 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:08:20.118 [2024-11-18 14:10:12.018011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:20.118 [2024-11-18 14:10:12.018275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114840 ] 00:08:20.685 [2024-11-18 14:10:12.578533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.685 [2024-11-18 14:10:12.652113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.685 [2024-11-18 14:10:12.652422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.944 14:10:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.944 14:10:12 -- common/autotest_common.sh@862 -- # return 0 00:08:20.944 00:08:20.944 14:10:12 -- json_config/json_config.sh@115 -- # echo '' 00:08:20.944 14:10:12 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:20.944 14:10:12 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:20.944 14:10:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:20.944 14:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:20.944 14:10:12 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:20.944 14:10:12 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:20.944 14:10:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.944 14:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:20.944 14:10:13 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:20.944 14:10:13 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:20.944 14:10:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:21.512 14:10:13 -- 
json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:21.512 14:10:13 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:21.512 14:10:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.512 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:21.512 14:10:13 -- json_config/json_config.sh@48 -- # local ret=0 00:08:21.512 14:10:13 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:21.512 14:10:13 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:21.513 14:10:13 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:21.513 14:10:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:21.513 14:10:13 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:21.772 14:10:13 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:21.772 14:10:13 -- json_config/json_config.sh@51 -- # local get_types 00:08:21.772 14:10:13 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:21.772 14:10:13 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:21.772 14:10:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.772 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:21.772 14:10:13 -- json_config/json_config.sh@58 -- # return 0 00:08:21.772 14:10:13 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:21.772 14:10:13 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:08:21.772 14:10:13 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:21.772 14:10:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.772 14:10:13 -- common/autotest_common.sh@10 -- # set +x 00:08:21.772 14:10:13 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:21.772 14:10:13 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:21.772 14:10:13 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:21.772 14:10:13 -- json_config/json_config.sh@164 -- # get_notifications 00:08:21.772 14:10:13 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:21.772 14:10:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:21.772 14:10:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:21.772 14:10:13 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:21.772 14:10:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:21.772 14:10:13 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:22.031 14:10:13 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:22.031 14:10:13 -- json_config/json_config.sh@64 -- # IFS=: 00:08:22.031 14:10:13 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:22.031 14:10:13 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:22.031 14:10:13 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:22.031 14:10:13 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:22.031 14:10:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_split_create Nvme0n1 2
00:08:22.290 Nvme0n1p0 Nvme0n1p1
00:08:22.290 14:10:14 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3
00:08:22.290 14:10:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:08:22.549 [2024-11-18 14:10:14.519261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:22.549 [2024-11-18 14:10:14.519454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:22.549
00:08:22.549 14:10:14 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:08:22.549 14:10:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:08:22.809 Malloc3
00:08:22.809 14:10:14 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:08:22.809 14:10:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:08:23.068 [2024-11-18 14:10:15.087568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:23.068 [2024-11-18 14:10:15.087720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:23.068 [2024-11-18 14:10:15.087787] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80
00:08:23.068 [2024-11-18 14:10:15.087837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:23.068 [2024-11-18 14:10:15.090785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:23.068 [2024-11-18 14:10:15.090918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:08:23.068 PTBdevFromMalloc3
00:08:23.068 14:10:15 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512
00:08:23.068 14:10:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:08:23.327 Null0
00:08:23.327 14:10:15 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:08:23.327 14:10:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:08:23.586 Malloc0
00:08:23.586 14:10:15 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:08:23.586 14:10:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:08:23.845 Malloc1
00:08:23.845 14:10:15 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:08:23.845 14:10:15 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:08:24.104 102400+0 records in
00:08:24.104 102400+0 records out
00:08:24.104 104857600 bytes (105 MB, 100 MiB) copied, 0.304682 s, 344 MB/s
00:08:24.104 14:10:16 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:08:24.104 14:10:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:08:24.363 aio_disk
00:08:24.363 14:10:16 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk)
00:08:24.363 14:10:16 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:08:24.363 14:10:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:08:24.622 d72bb493-a9e1-4d73-8134-d722774a3b53
00:08:24.622 14:10:16 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:08:24.622 14:10:16 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:08:24.622 14:10:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:08:24.881 14:10:16 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:08:24.881 14:10:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:08:25.140 14:10:17 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:08:25.140 14:10:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:08:25.399 14:10:17 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:08:25.399 14:10:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:08:25.658 14:10:17 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]]
00:08:25.658 14:10:17 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]]
00:08:25.658 14:10:17 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8d369a43-491f-4c58-94ef-e7fe80d36c25 bdev_register:f6900344-da2a-4e90-819b-954ea3125e0f bdev_register:bbaf731e-7d83-490c-bb64-5c673a13b08f bdev_register:9d4b5f52-95f3-4c36-8148-8850d160c0b1
00:08:25.658 14:10:17 -- json_config/json_config.sh@70 -- # local events_to_check
00:08:25.658 14:10:17 -- json_config/json_config.sh@71 -- # local recorded_events
00:08:25.658 14:10:17 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:08:25.658 14:10:17 -- json_config/json_config.sh@74 -- # sort
00:08:25.658 14:10:17 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1
bdev_register:aio_disk bdev_register:8d369a43-491f-4c58-94ef-e7fe80d36c25 bdev_register:f6900344-da2a-4e90-819b-954ea3125e0f bdev_register:bbaf731e-7d83-490c-bb64-5c673a13b08f bdev_register:9d4b5f52-95f3-4c36-8148-8850d160c0b1
00:08:25.658 14:10:17 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort))
00:08:25.658 14:10:17 -- json_config/json_config.sh@75 -- # get_notifications
00:08:25.658 14:10:17 -- json_config/json_config.sh@75 -- # sort
00:08:25.658 14:10:17 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:08:25.658 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.658 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.658 14:10:17 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:08:25.658 14:10:17 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:08:25.658 14:10:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=:
00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=: 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=: 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:8d369a43-491f-4c58-94ef-e7fe80d36c25 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=: 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:f6900344-da2a-4e90-819b-954ea3125e0f 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=: 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:bbaf731e-7d83-490c-bb64-5c673a13b08f 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=: 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:25.917 14:10:17 -- json_config/json_config.sh@65 -- # echo bdev_register:9d4b5f52-95f3-4c36-8148-8850d160c0b1 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # IFS=: 00:08:25.917 14:10:17 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:25.918 14:10:17 -- json_config/json_config.sh@77 -- # [[ bdev_register:8d369a43-491f-4c58-94ef-e7fe80d36c25 bdev_register:9d4b5f52-95f3-4c36-8148-8850d160c0b1 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:bbaf731e-7d83-490c-bb64-5c673a13b08f bdev_register:f6900344-da2a-4e90-819b-954ea3125e0f != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\d\3\6\9\a\4\3\-\4\9\1\f\-\4\c\5\8\-\9\4\e\f\-\e\7\f\e\8\0\d\3\6\c\2\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\d\4\b\5\f\5\2\-\9\5\f\3\-\4\c\3\6\-\8\1\4\8\-\8\8\5\0\d\1\6\0\c\0\b\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\b\a\f\7\3\1\e\-\7\d\8\3\-\4\9\0\c\-\b\b\6\4\-\5\c\6\7\3\a\1\3\b\0\8\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\6\9\0\0\3\4\4\-\d\a\2\a\-\4\e\9\0\-\8\1\9\b\-\9\5\4\e\a\3\1\2\5\e\0\f ]] 00:08:25.918 14:10:17 -- json_config/json_config.sh@89 -- # cat 00:08:25.918 14:10:17 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:8d369a43-491f-4c58-94ef-e7fe80d36c25 bdev_register:9d4b5f52-95f3-4c36-8148-8850d160c0b1 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 
bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:bbaf731e-7d83-490c-bb64-5c673a13b08f bdev_register:f6900344-da2a-4e90-819b-954ea3125e0f
00:08:25.918 Expected events matched:
00:08:25.918 bdev_register:8d369a43-491f-4c58-94ef-e7fe80d36c25
00:08:25.918 bdev_register:9d4b5f52-95f3-4c36-8148-8850d160c0b1
00:08:25.918 bdev_register:Malloc0
00:08:25.918 bdev_register:Malloc0p0
00:08:25.918 bdev_register:Malloc0p1
00:08:25.918 bdev_register:Malloc0p2
00:08:25.918 bdev_register:Malloc1
00:08:25.918 bdev_register:Malloc3
00:08:25.918 bdev_register:Null0
00:08:25.918 bdev_register:Nvme0n1
00:08:25.918 bdev_register:Nvme0n1p0
00:08:25.918 bdev_register:Nvme0n1p1
00:08:25.918 bdev_register:PTBdevFromMalloc3
00:08:25.918 bdev_register:aio_disk
00:08:25.918 bdev_register:bbaf731e-7d83-490c-bb64-5c673a13b08f
00:08:25.918 bdev_register:f6900344-da2a-4e90-819b-954ea3125e0f
00:08:25.918 14:10:17 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config
00:08:25.918 14:10:17 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:25.918 14:10:17 -- common/autotest_common.sh@10 -- # set +x
00:08:25.918 14:10:17 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]]
00:08:25.918 14:10:17 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]]
00:08:25.918 14:10:17 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]]
00:08:25.918 14:10:17 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target
00:08:25.918 14:10:17 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:25.918 14:10:17 -- common/autotest_common.sh@10 -- # set +x
00:08:26.177 14:10:18 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]]
00:08:26.177 14:10:18 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:08:26.177 14:10:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:08:26.177 MallocBdevForConfigChangeCheck
00:08:26.177 14:10:18 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init
00:08:26.177 14:10:18 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:26.177 14:10:18 -- common/autotest_common.sh@10 -- # set +x
00:08:26.177 14:10:18 -- json_config/json_config.sh@422 -- # tgt_rpc save_config
00:08:26.177 14:10:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:08:26.720 INFO: shutting down applications...
14:10:18 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...'
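The notification check that just passed can be replayed by hand against a live target; both the RPC call and the jq filter appear verbatim in the trace above. A minimal sketch, assuming the target is still listening on /var/tmp/spdk_tgt.sock and the working directory is the SPDK repo root:

  # fetch every bdev event recorded since ID 0, normalized the way get_notifications does
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 \
      | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort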
00:08:26.745 14:10:18 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]]
00:08:26.745 14:10:18 -- json_config/json_config.sh@431 -- # json_config_clear target
00:08:26.745 14:10:18 -- json_config/json_config.sh@385 -- # [[ -n 22 ]]
00:08:26.745 14:10:18 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:08:26.745 [2024-11-18 14:10:18.696719] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:08:27.003 Calling clear_vhost_scsi_subsystem
00:08:27.003 Calling clear_iscsi_subsystem
00:08:27.003 Calling clear_vhost_blk_subsystem
00:08:27.003 Calling clear_nbd_subsystem
00:08:27.003 Calling clear_nvmf_subsystem
00:08:27.003 Calling clear_bdev_subsystem
00:08:27.003 Calling clear_accel_subsystem
00:08:27.003 Calling clear_iobuf_subsystem
00:08:27.003 Calling clear_sock_subsystem
00:08:27.003 Calling clear_vmd_subsystem
00:08:27.003 Calling clear_scheduler_subsystem
00:08:27.003 14:10:18 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:08:27.003 14:10:18 -- json_config/json_config.sh@396 -- # count=100
00:08:27.003 14:10:18 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']'
00:08:27.003 14:10:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:08:27.003 14:10:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:08:27.003 14:10:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:08:27.260 14:10:19 -- json_config/json_config.sh@398 -- # break
00:08:27.260 14:10:19 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']'
00:08:27.260 14:10:19 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target
00:08:27.260 14:10:19 -- json_config/json_config.sh@120 -- # local app=target
00:08:27.260 14:10:19 -- json_config/json_config.sh@123 -- # [[ -n 22 ]]
00:08:27.260 14:10:19 -- json_config/json_config.sh@124 -- # [[ -n 114840 ]]
00:08:27.260 14:10:19 -- json_config/json_config.sh@127 -- # kill -SIGINT 114840
00:08:27.260 14:10:19 -- json_config/json_config.sh@129 -- # (( i = 0 ))
00:08:27.260 14:10:19 -- json_config/json_config.sh@129 -- # (( i < 30 ))
00:08:27.260 14:10:19 -- json_config/json_config.sh@130 -- # kill -0 114840
00:08:27.260 14:10:19 -- json_config/json_config.sh@134 -- # sleep 0.5
00:08:27.824 SPDK target shutdown done
INFO: relaunching applications...
14:10:19 -- json_config/json_config.sh@129 -- # (( i++ ))
00:08:27.824 14:10:19 -- json_config/json_config.sh@129 -- # (( i < 30 ))
00:08:27.824 14:10:19 -- json_config/json_config.sh@130 -- # kill -0 114840
00:08:27.824 14:10:19 -- json_config/json_config.sh@131 -- # app_pid[$app]=
00:08:27.824 14:10:19 -- json_config/json_config.sh@132 -- # break
00:08:27.824 14:10:19 -- json_config/json_config.sh@137 -- # [[ -n '' ]]
00:08:27.824 14:10:19 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done'
00:08:27.824 14:10:19 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...'
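The shutdown just traced follows a fixed pattern: empty the configuration over RPC, send SIGINT, then poll the PID for at most 30 half-second intervals. A rough shell equivalent, not the exact helper from json_config.sh, with $pid standing in for the target PID (114840 in this run):

  ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  kill -SIGINT "$pid"
  for i in $(seq 1 30); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes liveness
      sleep 0.5
  done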
00:08:27.825 14:10:19 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:27.825 14:10:19 -- json_config/json_config.sh@98 -- # local app=target
00:08:27.825 14:10:19 -- json_config/json_config.sh@99 -- # shift
00:08:27.825 14:10:19 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:08:27.825 Waiting for target to run...
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
14:10:19 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:08:27.825 14:10:19 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:08:27.825 14:10:19 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:08:27.825 14:10:19 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:08:27.825 14:10:19 -- json_config/json_config.sh@111 -- # app_pid[$app]=115093
00:08:27.825 14:10:19 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:27.825 14:10:19 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:08:27.825 14:10:19 -- json_config/json_config.sh@114 -- # waitforlisten 115093 /var/tmp/spdk_tgt.sock
00:08:27.825 14:10:19 -- common/autotest_common.sh@829 -- # '[' -z 115093 ']'
00:08:27.825 14:10:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:08:27.825 14:10:19 -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:27.825 14:10:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:08:27.825 14:10:19 -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:27.825 14:10:19 -- common/autotest_common.sh@10 -- # set +x
00:08:27.825 [2024-11-18 14:10:19.726470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
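Relaunching feeds the JSON written by save_config straight back to spdk_tgt, so the second boot should rebuild the identical bdev tree. A sketch of the launch-and-wait step; the polling loop only approximates what waitforlisten does and is not the actual helper:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &
  # block until the RPC socket answers a trivial call
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done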
00:08:27.825 [2024-11-18 14:10:19.727017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115093 ] 00:08:28.391 [2024-11-18 14:10:20.250819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.391 [2024-11-18 14:10:20.315754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.391 [2024-11-18 14:10:20.316286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.650 [2024-11-18 14:10:20.467961] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:28.650 [2024-11-18 14:10:20.468299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:28.650 [2024-11-18 14:10:20.475914] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:28.650 [2024-11-18 14:10:20.476088] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:28.650 [2024-11-18 14:10:20.483959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:28.650 [2024-11-18 14:10:20.484145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:28.650 [2024-11-18 14:10:20.484292] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:28.650 [2024-11-18 14:10:20.568155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:28.650 [2024-11-18 14:10:20.568378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.650 [2024-11-18 14:10:20.568551] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:28.650 [2024-11-18 14:10:20.568715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.650 [2024-11-18 14:10:20.569399] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.650 [2024-11-18 14:10:20.569555] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:29.587 00:08:29.587 INFO: Checking if target configuration is the same... 00:08:29.587 14:10:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.587 14:10:21 -- common/autotest_common.sh@862 -- # return 0 00:08:29.587 14:10:21 -- json_config/json_config.sh@115 -- # echo '' 00:08:29.587 14:10:21 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:29.587 14:10:21 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:29.587 14:10:21 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:29.587 14:10:21 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:29.587 14:10:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:29.587 + '[' 2 -ne 2 ']' 00:08:29.587 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:29.587 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:29.587 + rootdir=/home/vagrant/spdk_repo/spdk
00:08:29.587 +++ basename /dev/fd/62
00:08:29.587 ++ mktemp /tmp/62.XXX
00:08:29.587 + tmp_file_1=/tmp/62.60B
00:08:29.587 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:29.587 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:08:29.587 + tmp_file_2=/tmp/spdk_tgt_config.json.IHa
00:08:29.587 + ret=0
00:08:29.587 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:08:29.587 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:08:29.846 + diff -u /tmp/62.60B /tmp/spdk_tgt_config.json.IHa
00:08:29.846 INFO: JSON config files are the same
+ echo 'INFO: JSON config files are the same'
00:08:29.846 + rm /tmp/62.60B /tmp/spdk_tgt_config.json.IHa
00:08:29.846 + exit 0
00:08:29.846 INFO: changing configuration and checking if this can be detected...
14:10:21 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]]
00:08:29.847 14:10:21 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:08:29.847 14:10:21 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:08:29.847 14:10:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:08:29.847 14:10:21 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:29.847 14:10:21 -- json_config/json_config.sh@450 -- # tgt_rpc save_config
00:08:29.847 14:10:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:08:29.847 + '[' 2 -ne 2 ']'
00:08:29.847 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:08:29.847 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:08:29.847 + rootdir=/home/vagrant/spdk_repo/spdk
00:08:29.847 +++ basename /dev/fd/62
00:08:29.847 ++ mktemp /tmp/62.XXX
00:08:30.106 + tmp_file_1=/tmp/62.tXT
00:08:30.106 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:08:30.106 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:08:30.106 + tmp_file_2=/tmp/spdk_tgt_config.json.ITd
00:08:30.106 + ret=0
00:08:30.106 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:08:30.366 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:08:30.366 + diff -u /tmp/62.tXT /tmp/spdk_tgt_config.json.ITd
00:08:30.366 + ret=1
00:08:30.366 + echo '=== Start of file: /tmp/62.tXT ==='
00:08:30.366 + cat /tmp/62.tXT
00:08:30.366 + echo '=== End of file: /tmp/62.tXT ==='
00:08:30.366 + echo ''
00:08:30.366 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ITd ==='
00:08:30.366 + cat /tmp/spdk_tgt_config.json.ITd
00:08:30.366 + echo '=== End of file: /tmp/spdk_tgt_config.json.ITd ==='
00:08:30.366 + echo ''
00:08:30.366 + rm /tmp/62.tXT /tmp/spdk_tgt_config.json.ITd
00:08:30.366 + exit 1
00:08:30.366 14:10:22 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.'
INFO: configuration change detected.
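Both comparisons above reduce to a plain diff: the live config and the saved file are each normalized with config_filter.py -method sort so that key ordering can neither hide nor fake a change, and a non-empty diff flips ret to 1. Condensed into a sketch, with illustrative temp-file names rather than the mktemp names from this run:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'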
00:08:30.366 14:10:22 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:30.366 14:10:22 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:30.366 14:10:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.366 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:08:30.366 14:10:22 -- json_config/json_config.sh@360 -- # local ret=0 00:08:30.366 14:10:22 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:30.366 14:10:22 -- json_config/json_config.sh@370 -- # [[ -n 115093 ]] 00:08:30.366 14:10:22 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:30.366 14:10:22 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:30.366 14:10:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.366 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:08:30.366 14:10:22 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:30.366 14:10:22 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:30.366 14:10:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:30.625 14:10:22 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:30.625 14:10:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:30.625 14:10:22 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:30.625 14:10:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:30.884 14:10:22 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:30.884 14:10:22 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:31.143 14:10:23 -- json_config/json_config.sh@246 -- # uname -s 00:08:31.143 14:10:23 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:31.143 14:10:23 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:31.143 14:10:23 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:31.143 14:10:23 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:31.143 14:10:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.143 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:31.143 14:10:23 -- json_config/json_config.sh@376 -- # killprocess 115093 00:08:31.143 14:10:23 -- common/autotest_common.sh@936 -- # '[' -z 115093 ']' 00:08:31.143 14:10:23 -- common/autotest_common.sh@940 -- # kill -0 115093 00:08:31.143 14:10:23 -- common/autotest_common.sh@941 -- # uname 00:08:31.143 14:10:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:31.143 14:10:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115093 00:08:31.143 14:10:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:31.143 14:10:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:31.143 14:10:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115093' 00:08:31.143 killing process with pid 115093 00:08:31.143 14:10:23 -- common/autotest_common.sh@955 -- # kill 115093 00:08:31.143 14:10:23 -- common/autotest_common.sh@960 -- # wait 115093 00:08:31.711 14:10:23 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:31.711 14:10:23 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:31.711 14:10:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.711 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:31.711 14:10:23 -- json_config/json_config.sh@381 -- # return 0 00:08:31.711 14:10:23 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:31.711 INFO: Success 00:08:31.711 ************************************ 00:08:31.711 END TEST json_config 00:08:31.711 ************************************ 00:08:31.711 00:08:31.711 real 0m11.872s 00:08:31.711 user 0m17.503s 00:08:31.711 sys 0m2.581s 00:08:31.711 14:10:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:31.711 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:31.711 14:10:23 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:31.711 14:10:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:31.711 14:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.711 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:31.711 ************************************ 00:08:31.711 START TEST json_config_extra_key 00:08:31.711 ************************************ 00:08:31.711 14:10:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:31.711 14:10:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:31.711 14:10:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:31.711 14:10:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:31.970 14:10:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:31.970 14:10:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:31.970 14:10:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:31.970 14:10:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:31.970 14:10:23 -- scripts/common.sh@335 -- # IFS=.-: 00:08:31.970 14:10:23 -- scripts/common.sh@335 -- # read -ra ver1 00:08:31.970 14:10:23 -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.970 14:10:23 -- scripts/common.sh@336 -- # read -ra ver2 00:08:31.970 14:10:23 -- scripts/common.sh@337 -- # local 'op=<' 00:08:31.970 14:10:23 -- scripts/common.sh@339 -- # ver1_l=2 00:08:31.970 14:10:23 -- scripts/common.sh@340 -- # ver2_l=1 00:08:31.970 14:10:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:31.970 14:10:23 -- scripts/common.sh@343 -- # case "$op" in 00:08:31.970 14:10:23 -- scripts/common.sh@344 -- # : 1 00:08:31.970 14:10:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:31.970 14:10:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.970 14:10:23 -- scripts/common.sh@364 -- # decimal 1 00:08:31.970 14:10:23 -- scripts/common.sh@352 -- # local d=1 00:08:31.970 14:10:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.970 14:10:23 -- scripts/common.sh@354 -- # echo 1 00:08:31.970 14:10:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:31.970 14:10:23 -- scripts/common.sh@365 -- # decimal 2 00:08:31.970 14:10:23 -- scripts/common.sh@352 -- # local d=2 00:08:31.970 14:10:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.970 14:10:23 -- scripts/common.sh@354 -- # echo 2 00:08:31.970 14:10:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:31.970 14:10:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:31.970 14:10:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:31.970 14:10:23 -- scripts/common.sh@367 -- # return 0 00:08:31.970 14:10:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.971 14:10:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.971 --rc genhtml_branch_coverage=1 00:08:31.971 --rc genhtml_function_coverage=1 00:08:31.971 --rc genhtml_legend=1 00:08:31.971 --rc geninfo_all_blocks=1 00:08:31.971 --rc geninfo_unexecuted_blocks=1 00:08:31.971 00:08:31.971 ' 00:08:31.971 14:10:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.971 --rc genhtml_branch_coverage=1 00:08:31.971 --rc genhtml_function_coverage=1 00:08:31.971 --rc genhtml_legend=1 00:08:31.971 --rc geninfo_all_blocks=1 00:08:31.971 --rc geninfo_unexecuted_blocks=1 00:08:31.971 00:08:31.971 ' 00:08:31.971 14:10:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.971 --rc genhtml_branch_coverage=1 00:08:31.971 --rc genhtml_function_coverage=1 00:08:31.971 --rc genhtml_legend=1 00:08:31.971 --rc geninfo_all_blocks=1 00:08:31.971 --rc geninfo_unexecuted_blocks=1 00:08:31.971 00:08:31.971 ' 00:08:31.971 14:10:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:31.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.971 --rc genhtml_branch_coverage=1 00:08:31.971 --rc genhtml_function_coverage=1 00:08:31.971 --rc genhtml_legend=1 00:08:31.971 --rc geninfo_all_blocks=1 00:08:31.971 --rc geninfo_unexecuted_blocks=1 00:08:31.971 00:08:31.971 ' 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.971 14:10:23 -- nvmf/common.sh@7 -- # uname -s 00:08:31.971 14:10:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.971 14:10:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.971 14:10:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.971 14:10:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.971 14:10:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.971 14:10:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.971 14:10:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.971 14:10:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.971 14:10:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.971 14:10:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.971 14:10:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aecd29-332b-40b9-9f31-5e68bd0084a1 00:08:31.971 14:10:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aecd29-332b-40b9-9f31-5e68bd0084a1 00:08:31.971 14:10:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.971 14:10:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.971 14:10:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:31.971 14:10:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.971 14:10:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.971 14:10:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.971 14:10:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.971 14:10:23 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:31.971 14:10:23 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:31.971 14:10:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:31.971 14:10:23 -- paths/export.sh@5 -- # export PATH 00:08:31.971 14:10:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:31.971 14:10:23 -- nvmf/common.sh@46 -- # : 0 00:08:31.971 14:10:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:31.971 14:10:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:31.971 14:10:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:31.971 14:10:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.971 14:10:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.971 14:10:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:31.971 14:10:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:31.971 14:10:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:31.971 14:10:23 -- 
json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:31.971 INFO: launching applications... 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=115273 00:08:31.971 Waiting for target to run... 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 115273 /var/tmp/spdk_tgt.sock 00:08:31.971 14:10:23 -- common/autotest_common.sh@829 -- # '[' -z 115273 ']' 00:08:31.971 14:10:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:31.971 14:10:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:31.971 14:10:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:31.971 14:10:23 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:31.971 14:10:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.971 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:08:31.971 [2024-11-18 14:10:23.936466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:31.971 [2024-11-18 14:10:23.937375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115273 ] 00:08:32.538 [2024-11-18 14:10:24.498619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.538 [2024-11-18 14:10:24.576674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:32.538 [2024-11-18 14:10:24.576940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.105 14:10:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.105 14:10:24 -- common/autotest_common.sh@862 -- # return 0 00:08:33.105 00:08:33.105 INFO: shutting down applications... 
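Unlike the json_config test, json_config_extra_key boots the target from a canned file rather than building state over RPC, and the pass criterion is simply a clean start and shutdown. The launch traced above condenses to (paths relative to the repo root used in this run):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./test/json_config/extra_key.json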
00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 115273 ]] 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 115273 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115273 00:08:33.105 14:10:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:33.364 14:10:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:33.364 14:10:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:33.364 14:10:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115273 00:08:33.364 14:10:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115273 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:33.931 SPDK target shutdown done 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:33.931 Success 00:08:33.931 14:10:25 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:33.931 ************************************ 00:08:33.931 END TEST json_config_extra_key 00:08:33.931 ************************************ 00:08:33.931 00:08:33.931 real 0m2.212s 00:08:33.931 user 0m1.715s 00:08:33.931 sys 0m0.586s 00:08:33.931 14:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.931 14:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 14:10:25 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:33.931 14:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:33.931 14:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:33.931 14:10:25 -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 ************************************ 00:08:33.931 START TEST alias_rpc 00:08:33.931 ************************************ 00:08:33.931 14:10:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:34.190 * Looking for test storage... 
00:08:34.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:34.190 14:10:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.190 14:10:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.190 14:10:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.190 14:10:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.190 14:10:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.190 14:10:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.190 14:10:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.190 14:10:26 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.190 14:10:26 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.190 14:10:26 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.190 14:10:26 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.190 14:10:26 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.190 14:10:26 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.190 14:10:26 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.190 14:10:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.190 14:10:26 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.190 14:10:26 -- scripts/common.sh@344 -- # : 1 00:08:34.190 14:10:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.190 14:10:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.190 14:10:26 -- scripts/common.sh@364 -- # decimal 1 00:08:34.190 14:10:26 -- scripts/common.sh@352 -- # local d=1 00:08:34.190 14:10:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.190 14:10:26 -- scripts/common.sh@354 -- # echo 1 00:08:34.190 14:10:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.190 14:10:26 -- scripts/common.sh@365 -- # decimal 2 00:08:34.190 14:10:26 -- scripts/common.sh@352 -- # local d=2 00:08:34.191 14:10:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.191 14:10:26 -- scripts/common.sh@354 -- # echo 2 00:08:34.191 14:10:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.191 14:10:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.191 14:10:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.191 14:10:26 -- scripts/common.sh@367 -- # return 0 00:08:34.191 14:10:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.191 14:10:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.191 --rc genhtml_branch_coverage=1 00:08:34.191 --rc genhtml_function_coverage=1 00:08:34.191 --rc genhtml_legend=1 00:08:34.191 --rc geninfo_all_blocks=1 00:08:34.191 --rc geninfo_unexecuted_blocks=1 00:08:34.191 00:08:34.191 ' 00:08:34.191 14:10:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.191 --rc genhtml_branch_coverage=1 00:08:34.191 --rc genhtml_function_coverage=1 00:08:34.191 --rc genhtml_legend=1 00:08:34.191 --rc geninfo_all_blocks=1 00:08:34.191 --rc geninfo_unexecuted_blocks=1 00:08:34.191 00:08:34.191 ' 00:08:34.191 14:10:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.191 --rc genhtml_branch_coverage=1 00:08:34.191 --rc genhtml_function_coverage=1 00:08:34.191 --rc genhtml_legend=1 00:08:34.191 --rc geninfo_all_blocks=1 00:08:34.191 --rc geninfo_unexecuted_blocks=1 00:08:34.191 00:08:34.191 ' 
00:08:34.191 14:10:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.191 --rc genhtml_branch_coverage=1 00:08:34.191 --rc genhtml_function_coverage=1 00:08:34.191 --rc genhtml_legend=1 00:08:34.191 --rc geninfo_all_blocks=1 00:08:34.191 --rc geninfo_unexecuted_blocks=1 00:08:34.191 00:08:34.191 ' 00:08:34.191 14:10:26 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:34.191 14:10:26 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=115361 00:08:34.191 14:10:26 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:34.191 14:10:26 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 115361 00:08:34.191 14:10:26 -- common/autotest_common.sh@829 -- # '[' -z 115361 ']' 00:08:34.191 14:10:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.191 14:10:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.191 14:10:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.191 14:10:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.191 14:10:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.191 [2024-11-18 14:10:26.217606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:34.191 [2024-11-18 14:10:26.217870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115361 ] 00:08:34.449 [2024-11-18 14:10:26.363555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.449 [2024-11-18 14:10:26.438706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:34.449 [2024-11-18 14:10:26.438948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.384 14:10:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.384 14:10:27 -- common/autotest_common.sh@862 -- # return 0 00:08:35.384 14:10:27 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:35.384 14:10:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 115361 00:08:35.384 14:10:27 -- common/autotest_common.sh@936 -- # '[' -z 115361 ']' 00:08:35.384 14:10:27 -- common/autotest_common.sh@940 -- # kill -0 115361 00:08:35.384 14:10:27 -- common/autotest_common.sh@941 -- # uname 00:08:35.384 14:10:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.384 14:10:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115361 00:08:35.384 14:10:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.384 14:10:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.384 killing process with pid 115361 00:08:35.384 14:10:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115361' 00:08:35.384 14:10:27 -- common/autotest_common.sh@955 -- # kill 115361 00:08:35.384 14:10:27 -- common/autotest_common.sh@960 -- # wait 115361 00:08:36.383 ************************************ 00:08:36.383 END TEST alias_rpc 00:08:36.383 ************************************ 00:08:36.383 00:08:36.383 real 0m2.095s 00:08:36.383 user 0m2.192s 00:08:36.383 sys 0m0.558s 
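The working part of alias_rpc is the single load_config -i call traced above: rpc.py reads a configuration JSON from stdin and replays it, with -i understood here as keeping deprecated method aliases accepted, an assumption inferred from the test's purpose rather than re-checked against this revision of scripts/rpc.py:

  ./scripts/rpc.py load_config -i < config.json   # config.json is a stand-in name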
00:08:36.383 14:10:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.383 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:36.383 14:10:28 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:08:36.383 14:10:28 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:36.383 14:10:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:36.383 14:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.383 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:36.383 ************************************ 00:08:36.383 START TEST spdkcli_tcp 00:08:36.383 ************************************ 00:08:36.383 14:10:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:36.383 * Looking for test storage... 00:08:36.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:36.383 14:10:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:36.383 14:10:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:36.383 14:10:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:36.383 14:10:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:36.383 14:10:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:36.383 14:10:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:36.383 14:10:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:36.383 14:10:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:36.383 14:10:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:36.383 14:10:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.383 14:10:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:36.383 14:10:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:36.383 14:10:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:36.383 14:10:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:36.383 14:10:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:36.383 14:10:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:36.383 14:10:28 -- scripts/common.sh@344 -- # : 1 00:08:36.383 14:10:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:36.383 14:10:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.383 14:10:28 -- scripts/common.sh@364 -- # decimal 1 00:08:36.383 14:10:28 -- scripts/common.sh@352 -- # local d=1 00:08:36.383 14:10:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.383 14:10:28 -- scripts/common.sh@354 -- # echo 1 00:08:36.383 14:10:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:36.383 14:10:28 -- scripts/common.sh@365 -- # decimal 2 00:08:36.383 14:10:28 -- scripts/common.sh@352 -- # local d=2 00:08:36.383 14:10:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.383 14:10:28 -- scripts/common.sh@354 -- # echo 2 00:08:36.383 14:10:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:36.383 14:10:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:36.383 14:10:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:36.383 14:10:28 -- scripts/common.sh@367 -- # return 0 00:08:36.383 14:10:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.383 14:10:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.383 --rc genhtml_branch_coverage=1 00:08:36.383 --rc genhtml_function_coverage=1 00:08:36.383 --rc genhtml_legend=1 00:08:36.383 --rc geninfo_all_blocks=1 00:08:36.383 --rc geninfo_unexecuted_blocks=1 00:08:36.383 00:08:36.383 ' 00:08:36.383 14:10:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.383 --rc genhtml_branch_coverage=1 00:08:36.383 --rc genhtml_function_coverage=1 00:08:36.383 --rc genhtml_legend=1 00:08:36.383 --rc geninfo_all_blocks=1 00:08:36.383 --rc geninfo_unexecuted_blocks=1 00:08:36.383 00:08:36.383 ' 00:08:36.383 14:10:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.383 --rc genhtml_branch_coverage=1 00:08:36.383 --rc genhtml_function_coverage=1 00:08:36.383 --rc genhtml_legend=1 00:08:36.383 --rc geninfo_all_blocks=1 00:08:36.383 --rc geninfo_unexecuted_blocks=1 00:08:36.383 00:08:36.383 ' 00:08:36.383 14:10:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:36.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.383 --rc genhtml_branch_coverage=1 00:08:36.383 --rc genhtml_function_coverage=1 00:08:36.383 --rc genhtml_legend=1 00:08:36.383 --rc geninfo_all_blocks=1 00:08:36.383 --rc geninfo_unexecuted_blocks=1 00:08:36.383 00:08:36.383 ' 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:36.383 14:10:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:36.383 14:10:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:36.383 14:10:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.383 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=115454 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@27 -- # waitforlisten 115454 00:08:36.383 14:10:28 -- common/autotest_common.sh@829 -- # '[' -z 115454 ']' 
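spdkcli_tcp exercises rpc.py's TCP transport by fronting the target's UNIX socket with a TCP listener, as the trace below shows. The two commands, pulled out for readability, with port, socket path, and flags exactly as used in this run:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &               # TCP bridge to the RPC socket
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods     # -r retries, -t timeout in seconds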
00:08:36.383 14:10:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.383 14:10:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.383 14:10:28 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:36.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.383 14:10:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.383 14:10:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.383 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:08:36.383 [2024-11-18 14:10:28.391346] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:36.383 [2024-11-18 14:10:28.392194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115454 ] 00:08:36.671 [2024-11-18 14:10:28.548085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.671 [2024-11-18 14:10:28.664288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.671 [2024-11-18 14:10:28.664988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.671 [2024-11-18 14:10:28.664941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.605 14:10:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.605 14:10:29 -- common/autotest_common.sh@862 -- # return 0 00:08:37.605 14:10:29 -- spdkcli/tcp.sh@31 -- # socat_pid=115476 00:08:37.605 14:10:29 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:37.605 14:10:29 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:37.605 [ 00:08:37.605 "spdk_get_version", 00:08:37.605 "rpc_get_methods", 00:08:37.605 "trace_get_info", 00:08:37.605 "trace_get_tpoint_group_mask", 00:08:37.605 "trace_disable_tpoint_group", 00:08:37.605 "trace_enable_tpoint_group", 00:08:37.605 "trace_clear_tpoint_mask", 00:08:37.605 "trace_set_tpoint_mask", 00:08:37.605 "framework_get_pci_devices", 00:08:37.605 "framework_get_config", 00:08:37.605 "framework_get_subsystems", 00:08:37.605 "iobuf_get_stats", 00:08:37.605 "iobuf_set_options", 00:08:37.605 "sock_set_default_impl", 00:08:37.605 "sock_impl_set_options", 00:08:37.605 "sock_impl_get_options", 00:08:37.605 "vmd_rescan", 00:08:37.605 "vmd_remove_device", 00:08:37.605 "vmd_enable", 00:08:37.605 "accel_get_stats", 00:08:37.605 "accel_set_options", 00:08:37.605 "accel_set_driver", 00:08:37.605 "accel_crypto_key_destroy", 00:08:37.605 "accel_crypto_keys_get", 00:08:37.605 "accel_crypto_key_create", 00:08:37.605 "accel_assign_opc", 00:08:37.605 "accel_get_module_info", 00:08:37.605 "accel_get_opc_assignments", 00:08:37.605 "notify_get_notifications", 00:08:37.605 "notify_get_types", 00:08:37.605 "bdev_get_histogram", 00:08:37.605 "bdev_enable_histogram", 00:08:37.605 "bdev_set_qos_limit", 00:08:37.605 "bdev_set_qd_sampling_period", 00:08:37.605 "bdev_get_bdevs", 00:08:37.605 "bdev_reset_iostat", 00:08:37.605 "bdev_get_iostat", 00:08:37.605 "bdev_examine", 00:08:37.605 "bdev_wait_for_examine", 00:08:37.605 "bdev_set_options", 00:08:37.605 "scsi_get_devices", 00:08:37.605 "thread_set_cpumask", 00:08:37.605 
"framework_get_scheduler", 00:08:37.605 "framework_set_scheduler", 00:08:37.605 "framework_get_reactors", 00:08:37.605 "thread_get_io_channels", 00:08:37.605 "thread_get_pollers", 00:08:37.605 "thread_get_stats", 00:08:37.605 "framework_monitor_context_switch", 00:08:37.605 "spdk_kill_instance", 00:08:37.605 "log_enable_timestamps", 00:08:37.605 "log_get_flags", 00:08:37.605 "log_clear_flag", 00:08:37.606 "log_set_flag", 00:08:37.606 "log_get_level", 00:08:37.606 "log_set_level", 00:08:37.606 "log_get_print_level", 00:08:37.606 "log_set_print_level", 00:08:37.606 "framework_enable_cpumask_locks", 00:08:37.606 "framework_disable_cpumask_locks", 00:08:37.606 "framework_wait_init", 00:08:37.606 "framework_start_init", 00:08:37.606 "virtio_blk_create_transport", 00:08:37.606 "virtio_blk_get_transports", 00:08:37.606 "vhost_controller_set_coalescing", 00:08:37.606 "vhost_get_controllers", 00:08:37.606 "vhost_delete_controller", 00:08:37.606 "vhost_create_blk_controller", 00:08:37.606 "vhost_scsi_controller_remove_target", 00:08:37.606 "vhost_scsi_controller_add_target", 00:08:37.606 "vhost_start_scsi_controller", 00:08:37.606 "vhost_create_scsi_controller", 00:08:37.606 "nbd_get_disks", 00:08:37.606 "nbd_stop_disk", 00:08:37.606 "nbd_start_disk", 00:08:37.606 "env_dpdk_get_mem_stats", 00:08:37.606 "nvmf_subsystem_get_listeners", 00:08:37.606 "nvmf_subsystem_get_qpairs", 00:08:37.606 "nvmf_subsystem_get_controllers", 00:08:37.606 "nvmf_get_stats", 00:08:37.606 "nvmf_get_transports", 00:08:37.606 "nvmf_create_transport", 00:08:37.606 "nvmf_get_targets", 00:08:37.606 "nvmf_delete_target", 00:08:37.606 "nvmf_create_target", 00:08:37.606 "nvmf_subsystem_allow_any_host", 00:08:37.606 "nvmf_subsystem_remove_host", 00:08:37.606 "nvmf_subsystem_add_host", 00:08:37.606 "nvmf_subsystem_remove_ns", 00:08:37.606 "nvmf_subsystem_add_ns", 00:08:37.606 "nvmf_subsystem_listener_set_ana_state", 00:08:37.606 "nvmf_discovery_get_referrals", 00:08:37.606 "nvmf_discovery_remove_referral", 00:08:37.606 "nvmf_discovery_add_referral", 00:08:37.606 "nvmf_subsystem_remove_listener", 00:08:37.606 "nvmf_subsystem_add_listener", 00:08:37.606 "nvmf_delete_subsystem", 00:08:37.606 "nvmf_create_subsystem", 00:08:37.606 "nvmf_get_subsystems", 00:08:37.606 "nvmf_set_crdt", 00:08:37.606 "nvmf_set_config", 00:08:37.606 "nvmf_set_max_subsystems", 00:08:37.606 "iscsi_set_options", 00:08:37.606 "iscsi_get_auth_groups", 00:08:37.606 "iscsi_auth_group_remove_secret", 00:08:37.606 "iscsi_auth_group_add_secret", 00:08:37.606 "iscsi_delete_auth_group", 00:08:37.606 "iscsi_create_auth_group", 00:08:37.606 "iscsi_set_discovery_auth", 00:08:37.606 "iscsi_get_options", 00:08:37.606 "iscsi_target_node_request_logout", 00:08:37.606 "iscsi_target_node_set_redirect", 00:08:37.606 "iscsi_target_node_set_auth", 00:08:37.606 "iscsi_target_node_add_lun", 00:08:37.606 "iscsi_get_connections", 00:08:37.606 "iscsi_portal_group_set_auth", 00:08:37.606 "iscsi_start_portal_group", 00:08:37.606 "iscsi_delete_portal_group", 00:08:37.606 "iscsi_create_portal_group", 00:08:37.606 "iscsi_get_portal_groups", 00:08:37.606 "iscsi_delete_target_node", 00:08:37.606 "iscsi_target_node_remove_pg_ig_maps", 00:08:37.606 "iscsi_target_node_add_pg_ig_maps", 00:08:37.606 "iscsi_create_target_node", 00:08:37.606 "iscsi_get_target_nodes", 00:08:37.606 "iscsi_delete_initiator_group", 00:08:37.606 "iscsi_initiator_group_remove_initiators", 00:08:37.606 "iscsi_initiator_group_add_initiators", 00:08:37.606 "iscsi_create_initiator_group", 00:08:37.606 
"iscsi_get_initiator_groups", 00:08:37.606 "iaa_scan_accel_module", 00:08:37.606 "dsa_scan_accel_module", 00:08:37.606 "ioat_scan_accel_module", 00:08:37.606 "accel_error_inject_error", 00:08:37.606 "bdev_iscsi_delete", 00:08:37.606 "bdev_iscsi_create", 00:08:37.606 "bdev_iscsi_set_options", 00:08:37.606 "bdev_virtio_attach_controller", 00:08:37.606 "bdev_virtio_scsi_get_devices", 00:08:37.606 "bdev_virtio_detach_controller", 00:08:37.606 "bdev_virtio_blk_set_hotplug", 00:08:37.606 "bdev_ftl_set_property", 00:08:37.606 "bdev_ftl_get_properties", 00:08:37.606 "bdev_ftl_get_stats", 00:08:37.606 "bdev_ftl_unmap", 00:08:37.606 "bdev_ftl_unload", 00:08:37.606 "bdev_ftl_delete", 00:08:37.606 "bdev_ftl_load", 00:08:37.606 "bdev_ftl_create", 00:08:37.606 "bdev_aio_delete", 00:08:37.606 "bdev_aio_rescan", 00:08:37.606 "bdev_aio_create", 00:08:37.606 "blobfs_create", 00:08:37.606 "blobfs_detect", 00:08:37.606 "blobfs_set_cache_size", 00:08:37.606 "bdev_zone_block_delete", 00:08:37.606 "bdev_zone_block_create", 00:08:37.606 "bdev_delay_delete", 00:08:37.606 "bdev_delay_create", 00:08:37.606 "bdev_delay_update_latency", 00:08:37.606 "bdev_split_delete", 00:08:37.606 "bdev_split_create", 00:08:37.606 "bdev_error_inject_error", 00:08:37.606 "bdev_error_delete", 00:08:37.606 "bdev_error_create", 00:08:37.606 "bdev_raid_set_options", 00:08:37.606 "bdev_raid_remove_base_bdev", 00:08:37.606 "bdev_raid_add_base_bdev", 00:08:37.606 "bdev_raid_delete", 00:08:37.606 "bdev_raid_create", 00:08:37.606 "bdev_raid_get_bdevs", 00:08:37.606 "bdev_lvol_grow_lvstore", 00:08:37.606 "bdev_lvol_get_lvols", 00:08:37.606 "bdev_lvol_get_lvstores", 00:08:37.606 "bdev_lvol_delete", 00:08:37.606 "bdev_lvol_set_read_only", 00:08:37.606 "bdev_lvol_resize", 00:08:37.606 "bdev_lvol_decouple_parent", 00:08:37.606 "bdev_lvol_inflate", 00:08:37.606 "bdev_lvol_rename", 00:08:37.606 "bdev_lvol_clone_bdev", 00:08:37.606 "bdev_lvol_clone", 00:08:37.606 "bdev_lvol_snapshot", 00:08:37.606 "bdev_lvol_create", 00:08:37.606 "bdev_lvol_delete_lvstore", 00:08:37.606 "bdev_lvol_rename_lvstore", 00:08:37.606 "bdev_lvol_create_lvstore", 00:08:37.606 "bdev_passthru_delete", 00:08:37.606 "bdev_passthru_create", 00:08:37.606 "bdev_nvme_cuse_unregister", 00:08:37.606 "bdev_nvme_cuse_register", 00:08:37.606 "bdev_opal_new_user", 00:08:37.606 "bdev_opal_set_lock_state", 00:08:37.606 "bdev_opal_delete", 00:08:37.606 "bdev_opal_get_info", 00:08:37.606 "bdev_opal_create", 00:08:37.606 "bdev_nvme_opal_revert", 00:08:37.606 "bdev_nvme_opal_init", 00:08:37.606 "bdev_nvme_send_cmd", 00:08:37.606 "bdev_nvme_get_path_iostat", 00:08:37.606 "bdev_nvme_get_mdns_discovery_info", 00:08:37.606 "bdev_nvme_stop_mdns_discovery", 00:08:37.606 "bdev_nvme_start_mdns_discovery", 00:08:37.606 "bdev_nvme_set_multipath_policy", 00:08:37.606 "bdev_nvme_set_preferred_path", 00:08:37.606 "bdev_nvme_get_io_paths", 00:08:37.606 "bdev_nvme_remove_error_injection", 00:08:37.606 "bdev_nvme_add_error_injection", 00:08:37.606 "bdev_nvme_get_discovery_info", 00:08:37.606 "bdev_nvme_stop_discovery", 00:08:37.606 "bdev_nvme_start_discovery", 00:08:37.606 "bdev_nvme_get_controller_health_info", 00:08:37.606 "bdev_nvme_disable_controller", 00:08:37.606 "bdev_nvme_enable_controller", 00:08:37.606 "bdev_nvme_reset_controller", 00:08:37.606 "bdev_nvme_get_transport_statistics", 00:08:37.606 "bdev_nvme_apply_firmware", 00:08:37.606 "bdev_nvme_detach_controller", 00:08:37.606 "bdev_nvme_get_controllers", 00:08:37.606 "bdev_nvme_attach_controller", 00:08:37.606 "bdev_nvme_set_hotplug", 00:08:37.606 
"bdev_nvme_set_options", 00:08:37.606 "bdev_null_resize", 00:08:37.606 "bdev_null_delete", 00:08:37.606 "bdev_null_create", 00:08:37.606 "bdev_malloc_delete", 00:08:37.606 "bdev_malloc_create" 00:08:37.606 ] 00:08:37.868 14:10:29 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:37.868 14:10:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.868 14:10:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.868 14:10:29 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:37.868 14:10:29 -- spdkcli/tcp.sh@38 -- # killprocess 115454 00:08:37.868 14:10:29 -- common/autotest_common.sh@936 -- # '[' -z 115454 ']' 00:08:37.868 14:10:29 -- common/autotest_common.sh@940 -- # kill -0 115454 00:08:37.868 14:10:29 -- common/autotest_common.sh@941 -- # uname 00:08:37.868 14:10:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.868 14:10:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115454 00:08:37.868 14:10:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.868 killing process with pid 115454 00:08:37.868 14:10:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.868 14:10:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115454' 00:08:37.868 14:10:29 -- common/autotest_common.sh@955 -- # kill 115454 00:08:37.868 14:10:29 -- common/autotest_common.sh@960 -- # wait 115454 00:08:38.436 00:08:38.436 real 0m2.176s 00:08:38.436 user 0m3.829s 00:08:38.436 sys 0m0.636s 00:08:38.436 ************************************ 00:08:38.436 END TEST spdkcli_tcp 00:08:38.436 ************************************ 00:08:38.436 14:10:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.436 14:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:38.436 14:10:30 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:38.436 14:10:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.436 14:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.436 14:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:38.436 ************************************ 00:08:38.436 START TEST dpdk_mem_utility 00:08:38.436 ************************************ 00:08:38.436 14:10:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:38.436 * Looking for test storage... 
00:08:38.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:38.436 14:10:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.436 14:10:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.436 14:10:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.436 14:10:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.436 14:10:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.436 14:10:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.436 14:10:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.436 14:10:30 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.436 14:10:30 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.436 14:10:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.436 14:10:30 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.436 14:10:30 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.436 14:10:30 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.436 14:10:30 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.436 14:10:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.436 14:10:30 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.436 14:10:30 -- scripts/common.sh@344 -- # : 1 00:08:38.436 14:10:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.436 14:10:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.436 14:10:30 -- scripts/common.sh@364 -- # decimal 1 00:08:38.436 14:10:30 -- scripts/common.sh@352 -- # local d=1 00:08:38.436 14:10:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.436 14:10:30 -- scripts/common.sh@354 -- # echo 1 00:08:38.436 14:10:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.436 14:10:30 -- scripts/common.sh@365 -- # decimal 2 00:08:38.694 14:10:30 -- scripts/common.sh@352 -- # local d=2 00:08:38.695 14:10:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.695 14:10:30 -- scripts/common.sh@354 -- # echo 2 00:08:38.695 14:10:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.695 14:10:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.695 14:10:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.695 14:10:30 -- scripts/common.sh@367 -- # return 0 00:08:38.695 14:10:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.695 14:10:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.695 --rc genhtml_branch_coverage=1 00:08:38.695 --rc genhtml_function_coverage=1 00:08:38.695 --rc genhtml_legend=1 00:08:38.695 --rc geninfo_all_blocks=1 00:08:38.695 --rc geninfo_unexecuted_blocks=1 00:08:38.695 00:08:38.695 ' 00:08:38.695 14:10:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.695 --rc genhtml_branch_coverage=1 00:08:38.695 --rc genhtml_function_coverage=1 00:08:38.695 --rc genhtml_legend=1 00:08:38.695 --rc geninfo_all_blocks=1 00:08:38.695 --rc geninfo_unexecuted_blocks=1 00:08:38.695 00:08:38.695 ' 00:08:38.695 14:10:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.695 --rc genhtml_branch_coverage=1 00:08:38.695 --rc genhtml_function_coverage=1 00:08:38.695 --rc genhtml_legend=1 00:08:38.695 --rc geninfo_all_blocks=1 00:08:38.695 --rc geninfo_unexecuted_blocks=1 00:08:38.695 00:08:38.695 ' 
00:08:38.695 14:10:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.695 --rc genhtml_branch_coverage=1 00:08:38.695 --rc genhtml_function_coverage=1 00:08:38.695 --rc genhtml_legend=1 00:08:38.695 --rc geninfo_all_blocks=1 00:08:38.695 --rc geninfo_unexecuted_blocks=1 00:08:38.695 00:08:38.695 ' 00:08:38.695 14:10:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:38.695 14:10:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=115562 00:08:38.695 14:10:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 115562 00:08:38.695 14:10:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:38.695 14:10:30 -- common/autotest_common.sh@829 -- # '[' -z 115562 ']' 00:08:38.695 14:10:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.695 14:10:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.695 14:10:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.695 14:10:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.695 14:10:30 -- common/autotest_common.sh@10 -- # set +x 00:08:38.695 [2024-11-18 14:10:30.584810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.695 [2024-11-18 14:10:30.585064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115562 ] 00:08:38.695 [2024-11-18 14:10:30.727190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.954 [2024-11-18 14:10:30.801322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.954 [2024-11-18 14:10:30.801569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.525 14:10:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.525 14:10:31 -- common/autotest_common.sh@862 -- # return 0 00:08:39.525 14:10:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:39.525 14:10:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:39.525 14:10:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.525 14:10:31 -- common/autotest_common.sh@10 -- # set +x 00:08:39.525 { 00:08:39.525 "filename": "/tmp/spdk_mem_dump.txt" 00:08:39.525 } 00:08:39.525 14:10:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.525 14:10:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:39.525 DPDK memory size 814.000000 MiB in 1 heap(s) 00:08:39.525 1 heaps totaling size 814.000000 MiB 00:08:39.525 size: 814.000000 MiB heap id: 0 00:08:39.525 end heaps---------- 00:08:39.525 8 mempools totaling size 598.116089 MiB 00:08:39.525 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:39.525 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:39.525 size: 84.521057 MiB name: bdev_io_115562 00:08:39.525 size: 51.011292 MiB name: evtpool_115562 00:08:39.525 size: 50.003479 MiB name: 
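Editor's note: both spdk_tgt launches in this section are gated on the waitforlisten helper, whose xtrace (local rpc_addr=/var/tmp/spdk.sock, local max_retries=100, the closing "(( i == 0 ))" and "return 0") brackets each startup. A plausible shape for it, inferred from that trace rather than quoted from autotest_common.sh:

    # Assumed sketch: poll the target's RPC socket until it answers
    # or the retry budget runs out.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died during startup
            scripts/rpc.py -t 1 -s "$rpc_addr" spdk_get_version \
                >/dev/null 2>&1 && break                    # socket is answering
            sleep 0.5
        done
        (( i == 0 )) && return 1                            # retries exhausted
        return 0
    }

spdk_get_version is a safe probe here because, as the method list above shows, it is always registered.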
msgpool_115562 00:08:39.525 size: 21.763794 MiB name: PDU_Pool 00:08:39.525 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:39.525 size: 0.026123 MiB name: Session_Pool 00:08:39.525 end mempools------- 00:08:39.525 6 memzones totaling size 4.142822 MiB 00:08:39.525 size: 1.000366 MiB name: RG_ring_0_115562 00:08:39.525 size: 1.000366 MiB name: RG_ring_1_115562 00:08:39.525 size: 1.000366 MiB name: RG_ring_4_115562 00:08:39.525 size: 1.000366 MiB name: RG_ring_5_115562 00:08:39.525 size: 0.125366 MiB name: RG_ring_2_115562 00:08:39.525 size: 0.015991 MiB name: RG_ring_3_115562 00:08:39.525 end memzones------- 00:08:39.525 14:10:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:39.525 heap id: 0 total size: 814.000000 MiB number of busy elements: 220 number of free elements: 15 00:08:39.525 list of free elements. size: 12.486572 MiB 00:08:39.525 element at address: 0x200000400000 with size: 1.999512 MiB 00:08:39.525 element at address: 0x200018e00000 with size: 0.999878 MiB 00:08:39.525 element at address: 0x200019000000 with size: 0.999878 MiB 00:08:39.525 element at address: 0x200003e00000 with size: 0.996277 MiB 00:08:39.525 element at address: 0x200031c00000 with size: 0.994446 MiB 00:08:39.525 element at address: 0x200013800000 with size: 0.978699 MiB 00:08:39.525 element at address: 0x200007000000 with size: 0.959839 MiB 00:08:39.525 element at address: 0x200019200000 with size: 0.936584 MiB 00:08:39.525 element at address: 0x200000200000 with size: 0.837219 MiB 00:08:39.525 element at address: 0x20001aa00000 with size: 0.568237 MiB 00:08:39.525 element at address: 0x20000b200000 with size: 0.489807 MiB 00:08:39.525 element at address: 0x200000800000 with size: 0.486511 MiB 00:08:39.525 element at address: 0x200019400000 with size: 0.485657 MiB 00:08:39.525 element at address: 0x200027e00000 with size: 0.402527 MiB 00:08:39.525 element at address: 0x200003a00000 with size: 0.351501 MiB 00:08:39.525 list of standard malloc elements. 
size: 199.250854 MiB 00:08:39.525 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:08:39.525 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:08:39.525 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:39.525 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:08:39.525 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:39.525 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:39.525 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:08:39.525 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:39.525 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:08:39.525 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087c980 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x20000087cec0 with size: 0.000183 MiB 
00:08:39.525 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003adb300 with size: 0.000183 MiB 00:08:39.525 element at address: 0x200003adb500 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200003affa80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200003affb40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:08:39.526 element at 
address: 0x20001aa91c00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa940c0 
with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:08:39.526 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e670c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e67180 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6dd80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6edc0 with size: 0.000183 MiB 
00:08:39.526 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:08:39.526 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:08:39.527 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:08:39.527 list of memzone associated elements. size: 602.262573 MiB 00:08:39.527 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:08:39.527 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:39.527 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:08:39.527 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:39.527 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:08:39.527 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_115562_0 00:08:39.527 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:08:39.527 associated memzone info: size: 48.002930 MiB name: MP_evtpool_115562_0 00:08:39.527 element at address: 0x200003fff380 with size: 48.003052 MiB 00:08:39.527 associated memzone info: size: 48.002930 MiB name: MP_msgpool_115562_0 00:08:39.527 element at address: 0x2000195be940 with size: 20.255554 MiB 00:08:39.527 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:39.527 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:08:39.527 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:39.527 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:08:39.527 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_115562 00:08:39.527 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:08:39.527 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_115562 00:08:39.527 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:39.527 associated memzone info: size: 1.007996 MiB name: MP_evtpool_115562 00:08:39.527 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:08:39.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:39.527 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:08:39.527 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:39.527 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:08:39.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:39.527 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:08:39.527 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:39.527 element at address: 0x200003eff180 with size: 1.000488 MiB 00:08:39.527 associated memzone info: size: 1.000366 MiB name: RG_ring_0_115562 00:08:39.527 element at address: 0x200003affc00 with size: 1.000488 MiB 00:08:39.527 associated memzone info: size: 1.000366 MiB name: RG_ring_1_115562 00:08:39.527 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:08:39.527 associated memzone info: size: 1.000366 MiB name: RG_ring_4_115562 00:08:39.527 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:08:39.527 associated memzone info: size: 1.000366 MiB name: RG_ring_5_115562 00:08:39.527 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:08:39.527 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_115562 00:08:39.527 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:08:39.527 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:39.527 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:08:39.527 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:39.527 element at address: 0x20001947c540 with size: 0.250488 MiB 00:08:39.527 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:39.527 element at address: 0x200003adf880 with size: 0.125488 MiB 00:08:39.527 associated memzone info: size: 0.125366 MiB name: RG_ring_2_115562 00:08:39.527 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:08:39.527 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:39.527 element at address: 0x200027e67240 with size: 0.023743 MiB 00:08:39.527 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:39.527 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:08:39.527 associated memzone info: size: 0.015991 MiB name: RG_ring_3_115562 00:08:39.527 element at address: 0x200027e6d380 with size: 0.002441 MiB 00:08:39.527 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:39.527 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:08:39.527 associated memzone info: size: 0.000183 MiB name: MP_msgpool_115562 00:08:39.527 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:08:39.527 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_115562 00:08:39.527 element at address: 0x200027e6de40 with size: 0.000305 MiB 00:08:39.527 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:39.527 14:10:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:39.527 14:10:31 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 115562 00:08:39.527 14:10:31 -- common/autotest_common.sh@936 -- # '[' -z 115562 ']' 00:08:39.527 14:10:31 -- common/autotest_common.sh@940 -- # kill -0 115562 00:08:39.527 14:10:31 -- common/autotest_common.sh@941 -- # uname 00:08:39.527 14:10:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.527 14:10:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115562 00:08:39.527 14:10:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.527 killing process with pid 115562 
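Editor's note on the memory report above: test_dpdk_mem_info.sh produces it in two steps, both visible in the trace. The env_dpdk_get_mem_stats RPC asks the running target to write its allocator state to /tmp/spdk_mem_dump.txt, then scripts/dpdk_mem_info.py renders that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element view of malloc heap 0. Condensed (commands as recorded in the trace):

    # 1. dump DPDK memory state from the live target
    scripts/rpc.py env_dpdk_get_mem_stats    # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    # 2. summary: 1 heap of 814 MiB, 8 mempools, 6 memzones
    scripts/dpdk_mem_info.py
    # 3. detail of heap 0: free and busy elements with addresses and sizes
    scripts/dpdk_mem_info.py -m 0

The per-pid names in the output (msgpool_115562, evtpool_115562, bdev_io_115562, the RG_ring_* memzones) tie every pool back to this spdk_tgt instance.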
00:08:39.527 14:10:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.527 14:10:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115562' 00:08:39.527 14:10:31 -- common/autotest_common.sh@955 -- # kill 115562 00:08:39.527 14:10:31 -- common/autotest_common.sh@960 -- # wait 115562 00:08:40.463 ************************************ 00:08:40.463 END TEST dpdk_mem_utility 00:08:40.463 ************************************ 00:08:40.463 00:08:40.463 real 0m1.872s 00:08:40.463 user 0m1.785s 00:08:40.463 sys 0m0.523s 00:08:40.463 14:10:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.463 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:40.463 14:10:32 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:40.463 14:10:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.463 14:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.463 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:40.463 ************************************ 00:08:40.463 START TEST event 00:08:40.463 ************************************ 00:08:40.463 14:10:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:40.463 * Looking for test storage... 00:08:40.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:40.463 14:10:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:40.463 14:10:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:40.463 14:10:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:40.463 14:10:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:40.463 14:10:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:40.463 14:10:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:40.463 14:10:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:40.463 14:10:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:40.463 14:10:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:40.463 14:10:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.463 14:10:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:40.463 14:10:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:40.463 14:10:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:40.463 14:10:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:40.463 14:10:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:40.463 14:10:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:40.463 14:10:32 -- scripts/common.sh@344 -- # : 1 00:08:40.463 14:10:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:40.463 14:10:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.463 14:10:32 -- scripts/common.sh@364 -- # decimal 1 00:08:40.463 14:10:32 -- scripts/common.sh@352 -- # local d=1 00:08:40.463 14:10:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.463 14:10:32 -- scripts/common.sh@354 -- # echo 1 00:08:40.463 14:10:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:40.463 14:10:32 -- scripts/common.sh@365 -- # decimal 2 00:08:40.463 14:10:32 -- scripts/common.sh@352 -- # local d=2 00:08:40.463 14:10:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.463 14:10:32 -- scripts/common.sh@354 -- # echo 2 00:08:40.463 14:10:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:40.463 14:10:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:40.463 14:10:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:40.463 14:10:32 -- scripts/common.sh@367 -- # return 0 00:08:40.463 14:10:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.463 14:10:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.463 --rc genhtml_branch_coverage=1 00:08:40.463 --rc genhtml_function_coverage=1 00:08:40.463 --rc genhtml_legend=1 00:08:40.463 --rc geninfo_all_blocks=1 00:08:40.463 --rc geninfo_unexecuted_blocks=1 00:08:40.463 00:08:40.463 ' 00:08:40.463 14:10:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.463 --rc genhtml_branch_coverage=1 00:08:40.463 --rc genhtml_function_coverage=1 00:08:40.463 --rc genhtml_legend=1 00:08:40.463 --rc geninfo_all_blocks=1 00:08:40.463 --rc geninfo_unexecuted_blocks=1 00:08:40.463 00:08:40.463 ' 00:08:40.463 14:10:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.463 --rc genhtml_branch_coverage=1 00:08:40.463 --rc genhtml_function_coverage=1 00:08:40.463 --rc genhtml_legend=1 00:08:40.463 --rc geninfo_all_blocks=1 00:08:40.463 --rc geninfo_unexecuted_blocks=1 00:08:40.463 00:08:40.463 ' 00:08:40.463 14:10:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:40.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.463 --rc genhtml_branch_coverage=1 00:08:40.463 --rc genhtml_function_coverage=1 00:08:40.463 --rc genhtml_legend=1 00:08:40.463 --rc geninfo_all_blocks=1 00:08:40.463 --rc geninfo_unexecuted_blocks=1 00:08:40.463 00:08:40.463 ' 00:08:40.463 14:10:32 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:40.463 14:10:32 -- bdev/nbd_common.sh@6 -- # set -e 00:08:40.463 14:10:32 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:40.463 14:10:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:40.464 14:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.464 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:08:40.464 ************************************ 00:08:40.464 START TEST event_perf 00:08:40.464 ************************************ 00:08:40.464 14:10:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:40.464 Running I/O for 1 seconds...[2024-11-18 14:10:32.488193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
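Editor's note: every START TEST / END TEST banner pair in this log, including the event_perf one beginning here, comes from the run_test wrapper; the trace shows its argument-count guard ('[' 6 -le 1 ']'), and the real/user/sys lines show the test body running under time. A plausible sketch of that wrapper (inferred shape, not quoted from autotest_common.sh):

    # Assumed sketch of the banner wrapper used throughout this log.
    run_test() {
        [ "$#" -le 1 ] && { echo 'usage: run_test name cmd [args...]' >&2; return 1; }
        local name=$1 rc
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }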
00:08:40.464 [2024-11-18 14:10:32.488467] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115658 ] 00:08:40.721 [2024-11-18 14:10:32.656468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.721 [2024-11-18 14:10:32.792625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.722 [2024-11-18 14:10:32.792710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.722 [2024-11-18 14:10:32.792826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.722 [2024-11-18 14:10:32.792831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.097 Running I/O for 1 seconds... 00:08:42.097 lcore 0: 185198 00:08:42.097 lcore 1: 185200 00:08:42.097 lcore 2: 185203 00:08:42.097 lcore 3: 185197 00:08:42.097 done. 00:08:42.097 00:08:42.097 real 0m1.485s 00:08:42.097 user 0m4.267s 00:08:42.097 sys 0m0.116s 00:08:42.097 14:10:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.097 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:42.097 ************************************ 00:08:42.097 END TEST event_perf 00:08:42.097 ************************************ 00:08:42.097 14:10:33 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:42.097 14:10:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:42.097 14:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.097 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:08:42.097 ************************************ 00:08:42.097 START TEST event_reactor 00:08:42.097 ************************************ 00:08:42.097 14:10:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:42.097 [2024-11-18 14:10:34.026323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
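Editor's note: event_perf above ran with coremask 0xF, and its output is the direct decode of that mask: four "Reactor started on core N" notices and four per-lcore counters for cores 0 through 3. A small hypothetical helper to make the mask arithmetic concrete:

    # Decode an SPDK/DPDK coremask into the cores it selects.
    mask_to_cores() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$core")
            (( mask >>= 1, core += 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0xF    # -> 0 1 2 3 (this test)
    mask_to_cores 0x3    # -> 0 1    (the earlier spdk_tgt -m 0x3 run)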
00:08:42.097 [2024-11-18 14:10:34.026674] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115710 ] 00:08:42.355 [2024-11-18 14:10:34.184870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.355 [2024-11-18 14:10:34.317547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.731 test_start 00:08:43.731 oneshot 00:08:43.731 tick 100 00:08:43.731 tick 100 00:08:43.731 tick 250 00:08:43.731 tick 100 00:08:43.731 tick 100 00:08:43.731 tick 250 00:08:43.731 tick 100 00:08:43.731 tick 500 00:08:43.731 tick 100 00:08:43.731 tick 100 00:08:43.731 tick 250 00:08:43.731 tick 100 00:08:43.731 tick 100 00:08:43.731 test_end 00:08:43.731 00:08:43.731 real 0m1.473s 00:08:43.731 user 0m1.268s 00:08:43.731 sys 0m0.105s 00:08:43.731 14:10:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.731 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:08:43.731 ************************************ 00:08:43.731 END TEST event_reactor 00:08:43.731 ************************************ 00:08:43.731 14:10:35 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:43.731 14:10:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:43.731 14:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.731 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:08:43.731 ************************************ 00:08:43.731 START TEST event_reactor_perf 00:08:43.731 ************************************ 00:08:43.731 14:10:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:43.731 [2024-11-18 14:10:35.536387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:43.731 [2024-11-18 14:10:35.536641] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115751 ] 00:08:43.731 [2024-11-18 14:10:35.687030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.731 [2024-11-18 14:10:35.766444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.108 test_start 00:08:45.108 test_end 00:08:45.108 Performance: 373882 events per second 00:08:45.108 00:08:45.108 real 0m1.362s 00:08:45.108 user 0m1.170s 00:08:45.108 sys 0m0.092s 00:08:45.108 14:10:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.108 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:08:45.108 ************************************ 00:08:45.108 END TEST event_reactor_perf 00:08:45.108 ************************************ 00:08:45.108 14:10:36 -- event/event.sh@49 -- # uname -s 00:08:45.108 14:10:36 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:45.108 14:10:36 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:45.108 14:10:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.108 14:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.108 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:08:45.108 ************************************ 00:08:45.108 START TEST event_scheduler 00:08:45.108 ************************************ 00:08:45.108 14:10:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:45.108 * Looking for test storage... 00:08:45.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:45.108 14:10:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:45.108 14:10:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:45.108 14:10:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:45.108 14:10:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:45.108 14:10:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:45.108 14:10:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:45.108 14:10:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:45.108 14:10:37 -- scripts/common.sh@335 -- # IFS=.-: 00:08:45.108 14:10:37 -- scripts/common.sh@335 -- # read -ra ver1 00:08:45.108 14:10:37 -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.108 14:10:37 -- scripts/common.sh@336 -- # read -ra ver2 00:08:45.108 14:10:37 -- scripts/common.sh@337 -- # local 'op=<' 00:08:45.108 14:10:37 -- scripts/common.sh@339 -- # ver1_l=2 00:08:45.108 14:10:37 -- scripts/common.sh@340 -- # ver2_l=1 00:08:45.108 14:10:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:45.108 14:10:37 -- scripts/common.sh@343 -- # case "$op" in 00:08:45.108 14:10:37 -- scripts/common.sh@344 -- # : 1 00:08:45.108 14:10:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:45.108 14:10:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.108 14:10:37 -- scripts/common.sh@364 -- # decimal 1 00:08:45.108 14:10:37 -- scripts/common.sh@352 -- # local d=1 00:08:45.108 14:10:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.108 14:10:37 -- scripts/common.sh@354 -- # echo 1 00:08:45.108 14:10:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:45.108 14:10:37 -- scripts/common.sh@365 -- # decimal 2 00:08:45.108 14:10:37 -- scripts/common.sh@352 -- # local d=2 00:08:45.108 14:10:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.108 14:10:37 -- scripts/common.sh@354 -- # echo 2 00:08:45.108 14:10:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:45.108 14:10:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:45.108 14:10:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:45.108 14:10:37 -- scripts/common.sh@367 -- # return 0 00:08:45.108 14:10:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.108 14:10:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.108 --rc genhtml_branch_coverage=1 00:08:45.108 --rc genhtml_function_coverage=1 00:08:45.108 --rc genhtml_legend=1 00:08:45.108 --rc geninfo_all_blocks=1 00:08:45.108 --rc geninfo_unexecuted_blocks=1 00:08:45.108 00:08:45.108 ' 00:08:45.108 14:10:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.108 --rc genhtml_branch_coverage=1 00:08:45.108 --rc genhtml_function_coverage=1 00:08:45.108 --rc genhtml_legend=1 00:08:45.108 --rc geninfo_all_blocks=1 00:08:45.108 --rc geninfo_unexecuted_blocks=1 00:08:45.108 00:08:45.108 ' 00:08:45.108 14:10:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.108 --rc genhtml_branch_coverage=1 00:08:45.108 --rc genhtml_function_coverage=1 00:08:45.108 --rc genhtml_legend=1 00:08:45.108 --rc geninfo_all_blocks=1 00:08:45.108 --rc geninfo_unexecuted_blocks=1 00:08:45.108 00:08:45.108 ' 00:08:45.108 14:10:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:45.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.108 --rc genhtml_branch_coverage=1 00:08:45.108 --rc genhtml_function_coverage=1 00:08:45.108 --rc genhtml_legend=1 00:08:45.108 --rc geninfo_all_blocks=1 00:08:45.108 --rc geninfo_unexecuted_blocks=1 00:08:45.108 00:08:45.108 ' 00:08:45.108 14:10:37 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:45.108 14:10:37 -- scheduler/scheduler.sh@35 -- # scheduler_pid=115832 00:08:45.108 14:10:37 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.108 14:10:37 -- scheduler/scheduler.sh@37 -- # waitforlisten 115832 00:08:45.108 14:10:37 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:45.108 14:10:37 -- common/autotest_common.sh@829 -- # '[' -z 115832 ']' 00:08:45.108 14:10:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.108 14:10:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.108 14:10:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
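(Editor's sketch.) The waitforlisten trace here is blocking until the scheduler test app answers on /var/tmp/spdk.sock. A hedged sketch of the startup the harness performs, with the binary path, core mask, main lcore, and RPC names taken from the transcript; the readiness poll below stands in for the real waitforlisten helper and rpc_get_methods is used only as a liveness probe, both assumptions:

    app=/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $app -m 0xF -p 0x2 --wait-for-rpc -f &            # 4-core mask, main lcore 2, held until RPC init
    scheduler_pid=$!
    until $rpc rpc_get_methods &> /dev/null; do       # assumption: poll until the socket answers
        sleep 0.1
    done
    $rpc framework_set_scheduler dynamic              # as traced: select the dynamic scheduler
    $rpc framework_start_init                         # then finish subsystem initialization
    # thread creation then goes through the plugin RPC, e.g. (plugin lookup path is handled by the harness):
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100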
00:08:45.108 14:10:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.108 14:10:37 -- common/autotest_common.sh@10 -- # set +x 00:08:45.108 [2024-11-18 14:10:37.153763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:45.108 [2024-11-18 14:10:37.154238] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115832 ] 00:08:45.367 [2024-11-18 14:10:37.331197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.367 [2024-11-18 14:10:37.407435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.367 [2024-11-18 14:10:37.407637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.367 [2024-11-18 14:10:37.407734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.367 [2024-11-18 14:10:37.407734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.304 14:10:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.304 14:10:38 -- common/autotest_common.sh@862 -- # return 0 00:08:46.304 14:10:38 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 POWER: Env isn't set yet! 00:08:46.305 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:46.305 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.305 POWER: Cannot set governor of lcore 0 to userspace 00:08:46.305 POWER: Attempting to initialise PSTAT power management... 00:08:46.305 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.305 POWER: Cannot set governor of lcore 0 to performance 00:08:46.305 POWER: Attempting to initialise CPPC power management... 00:08:46.305 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:46.305 POWER: Cannot set governor of lcore 0 to userspace 00:08:46.305 POWER: Attempting to initialise VM power management... 
00:08:46.305 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:46.305 POWER: Unable to set Power Management Environment for lcore 0 00:08:46.305 [2024-11-18 14:10:38.138374] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:46.305 [2024-11-18 14:10:38.138442] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:46.305 [2024-11-18 14:10:38.138480] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:46.305 [2024-11-18 14:10:38.138537] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:46.305 [2024-11-18 14:10:38.138596] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:46.305 [2024-11-18 14:10:38.138613] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 [2024-11-18 14:10:38.257474] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:46.305 14:10:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.305 14:10:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 ************************************ 00:08:46.305 START TEST scheduler_create_thread 00:08:46.305 ************************************ 00:08:46.305 14:10:38 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 2 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 3 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 4 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 5 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 6 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 7 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 8 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 9 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 10 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.305 14:10:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.305 14:10:38 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:46.305 14:10:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.305 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 14:10:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.681 14:10:39 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:47.681 14:10:39 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:47.681 14:10:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.681 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:08:48.618 14:10:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.618 00:08:48.618 real 0m2.145s 00:08:48.618 user 0m0.023s 00:08:48.618 sys 0m0.000s 00:08:48.618 14:10:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.618 14:10:40 -- common/autotest_common.sh@10 -- # set +x 00:08:48.618 
************************************ 00:08:48.618 END TEST scheduler_create_thread 00:08:48.618 ************************************ 00:08:48.618 14:10:40 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:48.618 14:10:40 -- scheduler/scheduler.sh@46 -- # killprocess 115832 00:08:48.618 14:10:40 -- common/autotest_common.sh@936 -- # '[' -z 115832 ']' 00:08:48.618 14:10:40 -- common/autotest_common.sh@940 -- # kill -0 115832 00:08:48.618 14:10:40 -- common/autotest_common.sh@941 -- # uname 00:08:48.618 14:10:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:48.618 14:10:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115832 00:08:48.618 14:10:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:08:48.618 14:10:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:08:48.618 killing process with pid 115832 00:08:48.618 14:10:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115832' 00:08:48.618 14:10:40 -- common/autotest_common.sh@955 -- # kill 115832 00:08:48.618 14:10:40 -- common/autotest_common.sh@960 -- # wait 115832 00:08:48.877 [2024-11-18 14:10:40.897531] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:49.136 00:08:49.136 real 0m4.268s 00:08:49.136 user 0m7.758s 00:08:49.136 sys 0m0.401s 00:08:49.136 14:10:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.136 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:49.136 ************************************ 00:08:49.136 END TEST event_scheduler 00:08:49.136 ************************************ 00:08:49.395 14:10:41 -- event/event.sh@51 -- # modprobe -n nbd 00:08:49.395 14:10:41 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:49.395 14:10:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.395 14:10:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.395 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:49.395 ************************************ 00:08:49.395 START TEST app_repeat 00:08:49.395 ************************************ 00:08:49.395 14:10:41 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:08:49.395 14:10:41 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.395 14:10:41 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.395 14:10:41 -- event/event.sh@13 -- # local nbd_list 00:08:49.395 14:10:41 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.395 14:10:41 -- event/event.sh@14 -- # local bdev_list 00:08:49.395 14:10:41 -- event/event.sh@15 -- # local repeat_times=4 00:08:49.395 14:10:41 -- event/event.sh@17 -- # modprobe nbd 00:08:49.395 14:10:41 -- event/event.sh@19 -- # repeat_pid=115941 00:08:49.395 14:10:41 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:49.395 14:10:41 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.395 Process app_repeat pid: 115941 00:08:49.395 14:10:41 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 115941' 00:08:49.395 14:10:41 -- event/event.sh@23 -- # for i in {0..2} 00:08:49.395 spdk_app_start Round 0 00:08:49.395 14:10:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:49.395 14:10:41 -- event/event.sh@25 -- # waitforlisten 115941 /var/tmp/spdk-nbd.sock 00:08:49.395 14:10:41 -- common/autotest_common.sh@829 -- # '[' -z 115941 ']' 00:08:49.395 14:10:41 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.395 14:10:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:49.395 14:10:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:49.395 14:10:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.395 14:10:41 -- common/autotest_common.sh@10 -- # set +x 00:08:49.395 [2024-11-18 14:10:41.289365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:49.395 [2024-11-18 14:10:41.289563] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115941 ] 00:08:49.395 [2024-11-18 14:10:41.428586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.654 [2024-11-18 14:10:41.508157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.654 [2024-11-18 14:10:41.508161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.221 14:10:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.221 14:10:42 -- common/autotest_common.sh@862 -- # return 0 00:08:50.221 14:10:42 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.480 Malloc0 00:08:50.480 14:10:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.739 Malloc1 00:08:50.739 14:10:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@12 -- # local i 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.739 14:10:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:50.998 /dev/nbd0 00:08:50.998 14:10:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:50.998 14:10:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:50.998 14:10:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:50.998 14:10:42 -- common/autotest_common.sh@867 -- # local i 00:08:50.998 14:10:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:50.998 
14:10:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:50.998 14:10:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:50.998 14:10:42 -- common/autotest_common.sh@871 -- # break 00:08:50.998 14:10:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:50.998 14:10:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:50.998 14:10:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:50.998 1+0 records in 00:08:50.998 1+0 records out 00:08:50.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296449 s, 13.8 MB/s 00:08:50.998 14:10:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.998 14:10:42 -- common/autotest_common.sh@884 -- # size=4096 00:08:50.998 14:10:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:50.998 14:10:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:50.998 14:10:42 -- common/autotest_common.sh@887 -- # return 0 00:08:50.998 14:10:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:50.998 14:10:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:50.998 14:10:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:51.257 /dev/nbd1 00:08:51.257 14:10:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:51.257 14:10:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:51.257 14:10:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:51.257 14:10:43 -- common/autotest_common.sh@867 -- # local i 00:08:51.257 14:10:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:51.257 14:10:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:51.257 14:10:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:51.257 14:10:43 -- common/autotest_common.sh@871 -- # break 00:08:51.257 14:10:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:51.257 14:10:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:51.257 14:10:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.257 1+0 records in 00:08:51.257 1+0 records out 00:08:51.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213917 s, 19.1 MB/s 00:08:51.257 14:10:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.257 14:10:43 -- common/autotest_common.sh@884 -- # size=4096 00:08:51.257 14:10:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.257 14:10:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:51.257 14:10:43 -- common/autotest_common.sh@887 -- # return 0 00:08:51.257 14:10:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.257 14:10:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.257 14:10:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.257 14:10:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.258 14:10:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd0", 00:08:51.517 "bdev_name": "Malloc0" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": 
"/dev/nbd1", 00:08:51.517 "bdev_name": "Malloc1" 00:08:51.517 } 00:08:51.517 ]' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd0", 00:08:51.517 "bdev_name": "Malloc0" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd1", 00:08:51.517 "bdev_name": "Malloc1" 00:08:51.517 } 00:08:51.517 ]' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.517 /dev/nbd1' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.517 /dev/nbd1' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@65 -- # count=2 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@95 -- # count=2 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.517 14:10:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:51.776 256+0 records in 00:08:51.776 256+0 records out 00:08:51.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111467 s, 94.1 MB/s 00:08:51.776 14:10:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.776 14:10:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.776 256+0 records in 00:08:51.776 256+0 records out 00:08:51.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241172 s, 43.5 MB/s 00:08:51.776 14:10:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.776 14:10:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.776 256+0 records in 00:08:51.776 256+0 records out 00:08:51.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291566 s, 36.0 MB/s 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@85 
-- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@51 -- # local i 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.777 14:10:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@41 -- # break 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.036 14:10:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@41 -- # break 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.295 14:10:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.554 14:10:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@65 -- # true 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@65 -- # count=0 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@104 -- # count=0 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:52.555 14:10:44 -- bdev/nbd_common.sh@109 -- # return 0 00:08:52.555 14:10:44 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:52.813 14:10:44 -- event/event.sh@35 -- # sleep 3 00:08:53.072 [2024-11-18 14:10:45.077169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.072 [2024-11-18 14:10:45.137226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.072 
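(Editor's sketch.) Round 0 above ran the full create/attach/verify/teardown cycle before the app restarted for Round 1. A hedged sketch condensing that cycle; every RPC and dd/cmp invocation below appears verbatim in the trace (two 64 MB malloc bdevs with 4 KiB blocks, 1 MiB of random data written and read back per device), only the loop grouping is editorial:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s $sock bdev_malloc_create 64 4096              # creates Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096              # creates Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct # write it to each device
        cmp -b -n 1M $tmp $nbd                            # read back and verify
    done
    rm $tmp
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock spdk_kill_instance SIGTERM              # ends the round; the harness sleeps 3s and restarts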
[2024-11-18 14:10:45.137235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.331 [2024-11-18 14:10:45.207939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:53.331 [2024-11-18 14:10:45.208097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:55.864 14:10:47 -- event/event.sh@23 -- # for i in {0..2} 00:08:55.864 spdk_app_start Round 1 00:08:55.864 14:10:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:55.864 14:10:47 -- event/event.sh@25 -- # waitforlisten 115941 /var/tmp/spdk-nbd.sock 00:08:55.864 14:10:47 -- common/autotest_common.sh@829 -- # '[' -z 115941 ']' 00:08:55.864 14:10:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.864 14:10:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.864 14:10:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.864 14:10:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.864 14:10:47 -- common/autotest_common.sh@10 -- # set +x 00:08:56.122 14:10:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.122 14:10:48 -- common/autotest_common.sh@862 -- # return 0 00:08:56.122 14:10:48 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.411 Malloc0 00:08:56.411 14:10:48 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.714 Malloc1 00:08:56.714 14:10:48 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@12 -- # local i 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:56.714 /dev/nbd0 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:56.714 14:10:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:56.714 14:10:48 -- common/autotest_common.sh@867 -- # local i 00:08:56.714 14:10:48 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:56.714 14:10:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:56.714 14:10:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:56.714 14:10:48 -- common/autotest_common.sh@871 -- # break 00:08:56.714 14:10:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:56.714 14:10:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:56.714 14:10:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:56.714 1+0 records in 00:08:56.714 1+0 records out 00:08:56.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279897 s, 14.6 MB/s 00:08:56.714 14:10:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:56.714 14:10:48 -- common/autotest_common.sh@884 -- # size=4096 00:08:56.714 14:10:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:56.714 14:10:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:56.714 14:10:48 -- common/autotest_common.sh@887 -- # return 0 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:56.714 14:10:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:56.973 /dev/nbd1 00:08:57.231 14:10:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:57.231 14:10:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:57.231 14:10:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:57.231 14:10:49 -- common/autotest_common.sh@867 -- # local i 00:08:57.231 14:10:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:57.231 14:10:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:57.231 14:10:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:57.231 14:10:49 -- common/autotest_common.sh@871 -- # break 00:08:57.231 14:10:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:57.231 14:10:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:57.231 14:10:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.231 1+0 records in 00:08:57.231 1+0 records out 00:08:57.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249383 s, 16.4 MB/s 00:08:57.231 14:10:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.231 14:10:49 -- common/autotest_common.sh@884 -- # size=4096 00:08:57.231 14:10:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.231 14:10:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:57.231 14:10:49 -- common/autotest_common.sh@887 -- # return 0 00:08:57.231 14:10:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.231 14:10:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.231 14:10:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.232 14:10:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.232 14:10:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:57.491 { 00:08:57.491 "nbd_device": "/dev/nbd0", 00:08:57.491 "bdev_name": "Malloc0" 
00:08:57.491 }, 00:08:57.491 { 00:08:57.491 "nbd_device": "/dev/nbd1", 00:08:57.491 "bdev_name": "Malloc1" 00:08:57.491 } 00:08:57.491 ]' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:57.491 { 00:08:57.491 "nbd_device": "/dev/nbd0", 00:08:57.491 "bdev_name": "Malloc0" 00:08:57.491 }, 00:08:57.491 { 00:08:57.491 "nbd_device": "/dev/nbd1", 00:08:57.491 "bdev_name": "Malloc1" 00:08:57.491 } 00:08:57.491 ]' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:57.491 /dev/nbd1' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:57.491 /dev/nbd1' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@65 -- # count=2 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@95 -- # count=2 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:57.491 256+0 records in 00:08:57.491 256+0 records out 00:08:57.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658002 s, 159 MB/s 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:57.491 256+0 records in 00:08:57.491 256+0 records out 00:08:57.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237046 s, 44.2 MB/s 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:57.491 256+0 records in 00:08:57.491 256+0 records out 00:08:57.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257524 s, 40.7 MB/s 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@51 -- # local i 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.491 14:10:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@41 -- # break 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.750 14:10:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@41 -- # break 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.009 14:10:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@65 -- # true 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@65 -- # count=0 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@104 -- # count=0 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:58.268 14:10:50 -- bdev/nbd_common.sh@109 -- # return 0 00:08:58.268 14:10:50 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:58.526 14:10:50 -- event/event.sh@35 -- # sleep 3 00:08:58.785 [2024-11-18 14:10:50.847730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.043 [2024-11-18 14:10:50.897465] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:08:59.043 [2024-11-18 14:10:50.897472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.043 [2024-11-18 14:10:50.968558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:59.043 [2024-11-18 14:10:50.968658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:01.577 14:10:53 -- event/event.sh@23 -- # for i in {0..2} 00:09:01.577 spdk_app_start Round 2 00:09:01.577 14:10:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:01.577 14:10:53 -- event/event.sh@25 -- # waitforlisten 115941 /var/tmp/spdk-nbd.sock 00:09:01.577 14:10:53 -- common/autotest_common.sh@829 -- # '[' -z 115941 ']' 00:09:01.577 14:10:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:01.577 14:10:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:01.577 14:10:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:01.577 14:10:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.577 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:09:01.836 14:10:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.836 14:10:53 -- common/autotest_common.sh@862 -- # return 0 00:09:01.836 14:10:53 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:02.095 Malloc0 00:09:02.095 14:10:54 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:02.354 Malloc1 00:09:02.354 14:10:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@12 -- # local i 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.354 14:10:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:02.613 /dev/nbd0 00:09:02.613 14:10:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:02.613 14:10:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:02.613 14:10:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:02.613 14:10:54 -- common/autotest_common.sh@867 -- # local i 
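(Editor's sketch.) The xtrace running through this stretch is the waitfornbd helper. A hedged sketch of the pattern it implements: poll /proc/partitions until the device node appears (up to 20 tries, per the (( i <= 20 )) loop), then read one 4 KiB direct-I/O block to prove the device answers. The sleep between polls is an assumption; the grep, dd, and stat checks mirror the trace:

    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the real helper waits briefly between polls
        done
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)" != 0 ]  # the trace also checks the copied size
    }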
00:09:02.613 14:10:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.613 14:10:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.613 14:10:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:02.613 14:10:54 -- common/autotest_common.sh@871 -- # break 00:09:02.613 14:10:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.613 14:10:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.613 14:10:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:02.613 1+0 records in 00:09:02.613 1+0 records out 00:09:02.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219335 s, 18.7 MB/s 00:09:02.613 14:10:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.613 14:10:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.613 14:10:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.613 14:10:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.613 14:10:54 -- common/autotest_common.sh@887 -- # return 0 00:09:02.613 14:10:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.613 14:10:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.613 14:10:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:02.872 /dev/nbd1 00:09:02.872 14:10:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:02.872 14:10:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:02.872 14:10:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:02.872 14:10:54 -- common/autotest_common.sh@867 -- # local i 00:09:02.872 14:10:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.872 14:10:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.872 14:10:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:02.872 14:10:54 -- common/autotest_common.sh@871 -- # break 00:09:02.872 14:10:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.872 14:10:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.872 14:10:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:02.872 1+0 records in 00:09:02.872 1+0 records out 00:09:02.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801324 s, 5.1 MB/s 00:09:02.872 14:10:54 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.873 14:10:54 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.873 14:10:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:02.873 14:10:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.873 14:10:54 -- common/autotest_common.sh@887 -- # return 0 00:09:02.873 14:10:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:02.873 14:10:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:02.873 14:10:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:02.873 14:10:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.873 14:10:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.131 14:10:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:03.131 { 00:09:03.131 "nbd_device": "/dev/nbd0", 00:09:03.131 
"bdev_name": "Malloc0" 00:09:03.131 }, 00:09:03.131 { 00:09:03.131 "nbd_device": "/dev/nbd1", 00:09:03.131 "bdev_name": "Malloc1" 00:09:03.131 } 00:09:03.131 ]' 00:09:03.131 14:10:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:03.131 { 00:09:03.131 "nbd_device": "/dev/nbd0", 00:09:03.131 "bdev_name": "Malloc0" 00:09:03.131 }, 00:09:03.131 { 00:09:03.131 "nbd_device": "/dev/nbd1", 00:09:03.131 "bdev_name": "Malloc1" 00:09:03.131 } 00:09:03.131 ]' 00:09:03.131 14:10:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:03.391 /dev/nbd1' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:03.391 /dev/nbd1' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@65 -- # count=2 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@95 -- # count=2 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:03.391 256+0 records in 00:09:03.391 256+0 records out 00:09:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111886 s, 93.7 MB/s 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:03.391 256+0 records in 00:09:03.391 256+0 records out 00:09:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025758 s, 40.7 MB/s 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:03.391 256+0 records in 00:09:03.391 256+0 records out 00:09:03.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272768 s, 38.4 MB/s 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@51 -- # local i 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.391 14:10:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@41 -- # break 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.651 14:10:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@41 -- # break 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.910 14:10:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@65 -- # true 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@65 -- # count=0 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@104 -- # count=0 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:04.169 14:10:56 -- bdev/nbd_common.sh@109 -- # return 0 00:09:04.169 14:10:56 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:04.426 14:10:56 -- event/event.sh@35 -- # sleep 3 00:09:04.683 [2024-11-18 14:10:56.564990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:04.683 
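(Editor's sketch.) Before each restart, the harness confirms no nbd devices remain attached, which is what the nbd_get_disks/jq/grep sequence above is doing. A hedged sketch of that check; the '|| true' is an assumption added to absorb grep's nonzero exit status when it counts zero matches, consistent with the '# true' marker in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # all devices detached; safe to kill the app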
[2024-11-18 14:10:56.615806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.683 [2024-11-18 14:10:56.615813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.683 [2024-11-18 14:10:56.689572] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:04.683 [2024-11-18 14:10:56.689723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:07.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:07.967 14:10:59 -- event/event.sh@38 -- # waitforlisten 115941 /var/tmp/spdk-nbd.sock 00:09:07.967 14:10:59 -- common/autotest_common.sh@829 -- # '[' -z 115941 ']' 00:09:07.967 14:10:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:07.967 14:10:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.967 14:10:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:07.967 14:10:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.967 14:10:59 -- common/autotest_common.sh@10 -- # set +x 00:09:07.967 14:10:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.967 14:10:59 -- common/autotest_common.sh@862 -- # return 0 00:09:07.967 14:10:59 -- event/event.sh@39 -- # killprocess 115941 00:09:07.967 14:10:59 -- common/autotest_common.sh@936 -- # '[' -z 115941 ']' 00:09:07.967 14:10:59 -- common/autotest_common.sh@940 -- # kill -0 115941 00:09:07.967 14:10:59 -- common/autotest_common.sh@941 -- # uname 00:09:07.967 14:10:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.967 14:10:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115941 00:09:07.967 14:10:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.967 14:10:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.967 14:10:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115941' 00:09:07.967 killing process with pid 115941 00:09:07.967 14:10:59 -- common/autotest_common.sh@955 -- # kill 115941 00:09:07.967 14:10:59 -- common/autotest_common.sh@960 -- # wait 115941 00:09:07.967 spdk_app_start is called in Round 0. 00:09:07.967 Shutdown signal received, stop current app iteration 00:09:07.967 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:09:07.967 spdk_app_start is called in Round 1. 00:09:07.967 Shutdown signal received, stop current app iteration 00:09:07.967 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:09:07.967 spdk_app_start is called in Round 2. 00:09:07.967 Shutdown signal received, stop current app iteration 00:09:07.967 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:09:07.967 spdk_app_start is called in Round 3. 
00:09:07.967 Shutdown signal received, stop current app iteration 00:09:07.968 14:10:59 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:07.968 14:10:59 -- event/event.sh@42 -- # return 0 00:09:07.968 00:09:07.968 real 0m18.698s 00:09:07.968 user 0m41.564s 00:09:07.968 sys 0m2.859s 00:09:07.968 14:10:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:07.968 14:10:59 -- common/autotest_common.sh@10 -- # set +x 00:09:07.968 ************************************ 00:09:07.968 END TEST app_repeat 00:09:07.968 ************************************ 00:09:07.968 14:10:59 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:07.968 14:10:59 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:07.968 14:10:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:07.968 14:10:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:07.968 14:10:59 -- common/autotest_common.sh@10 -- # set +x 00:09:07.968 ************************************ 00:09:07.968 START TEST cpu_locks 00:09:07.968 ************************************ 00:09:07.968 14:11:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:08.226 * Looking for test storage... 00:09:08.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:08.226 14:11:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:08.226 14:11:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:08.226 14:11:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:08.226 14:11:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:08.226 14:11:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:08.226 14:11:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:08.226 14:11:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:08.226 14:11:00 -- scripts/common.sh@335 -- # IFS=.-: 00:09:08.226 14:11:00 -- scripts/common.sh@335 -- # read -ra ver1 00:09:08.226 14:11:00 -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.226 14:11:00 -- scripts/common.sh@336 -- # read -ra ver2 00:09:08.226 14:11:00 -- scripts/common.sh@337 -- # local 'op=<' 00:09:08.226 14:11:00 -- scripts/common.sh@339 -- # ver1_l=2 00:09:08.226 14:11:00 -- scripts/common.sh@340 -- # ver2_l=1 00:09:08.226 14:11:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:08.226 14:11:00 -- scripts/common.sh@343 -- # case "$op" in 00:09:08.226 14:11:00 -- scripts/common.sh@344 -- # : 1 00:09:08.226 14:11:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:08.226 14:11:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.226 14:11:00 -- scripts/common.sh@364 -- # decimal 1 00:09:08.226 14:11:00 -- scripts/common.sh@352 -- # local d=1 00:09:08.226 14:11:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.226 14:11:00 -- scripts/common.sh@354 -- # echo 1 00:09:08.226 14:11:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:08.226 14:11:00 -- scripts/common.sh@365 -- # decimal 2 00:09:08.226 14:11:00 -- scripts/common.sh@352 -- # local d=2 00:09:08.226 14:11:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.226 14:11:00 -- scripts/common.sh@354 -- # echo 2 00:09:08.226 14:11:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:08.226 14:11:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:08.226 14:11:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:08.226 14:11:00 -- scripts/common.sh@367 -- # return 0 00:09:08.226 14:11:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.226 14:11:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:08.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.226 --rc genhtml_branch_coverage=1 00:09:08.226 --rc genhtml_function_coverage=1 00:09:08.227 --rc genhtml_legend=1 00:09:08.227 --rc geninfo_all_blocks=1 00:09:08.227 --rc geninfo_unexecuted_blocks=1 00:09:08.227 00:09:08.227 ' 00:09:08.227 14:11:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:08.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.227 --rc genhtml_branch_coverage=1 00:09:08.227 --rc genhtml_function_coverage=1 00:09:08.227 --rc genhtml_legend=1 00:09:08.227 --rc geninfo_all_blocks=1 00:09:08.227 --rc geninfo_unexecuted_blocks=1 00:09:08.227 00:09:08.227 ' 00:09:08.227 14:11:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:08.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.227 --rc genhtml_branch_coverage=1 00:09:08.227 --rc genhtml_function_coverage=1 00:09:08.227 --rc genhtml_legend=1 00:09:08.227 --rc geninfo_all_blocks=1 00:09:08.227 --rc geninfo_unexecuted_blocks=1 00:09:08.227 00:09:08.227 ' 00:09:08.227 14:11:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:08.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.227 --rc genhtml_branch_coverage=1 00:09:08.227 --rc genhtml_function_coverage=1 00:09:08.227 --rc genhtml_legend=1 00:09:08.227 --rc geninfo_all_blocks=1 00:09:08.227 --rc geninfo_unexecuted_blocks=1 00:09:08.227 00:09:08.227 ' 00:09:08.227 14:11:00 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:08.227 14:11:00 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:08.227 14:11:00 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:08.227 14:11:00 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:08.227 14:11:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.227 14:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.227 14:11:00 -- common/autotest_common.sh@10 -- # set +x 00:09:08.227 ************************************ 00:09:08.227 START TEST default_locks 00:09:08.227 ************************************ 00:09:08.227 14:11:00 -- common/autotest_common.sh@1114 -- # default_locks 00:09:08.227 14:11:00 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116455 00:09:08.227 14:11:00 -- event/cpu_locks.sh@47 -- # waitforlisten 116455 00:09:08.227 14:11:00 -- common/autotest_common.sh@829 -- # '[' -z 116455 ']' 00:09:08.227 
14:11:00 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.227 14:11:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.227 14:11:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.227 14:11:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.227 14:11:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.227 14:11:00 -- common/autotest_common.sh@10 -- # set +x 00:09:08.227 [2024-11-18 14:11:00.267978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:08.227 [2024-11-18 14:11:00.268971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116455 ] 00:09:08.485 [2024-11-18 14:11:00.422115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.485 [2024-11-18 14:11:00.523335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:08.485 [2024-11-18 14:11:00.523640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.420 14:11:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.420 14:11:01 -- common/autotest_common.sh@862 -- # return 0 00:09:09.420 14:11:01 -- event/cpu_locks.sh@49 -- # locks_exist 116455 00:09:09.420 14:11:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:09.420 14:11:01 -- event/cpu_locks.sh@22 -- # lslocks -p 116455 00:09:09.678 14:11:01 -- event/cpu_locks.sh@50 -- # killprocess 116455 00:09:09.678 14:11:01 -- common/autotest_common.sh@936 -- # '[' -z 116455 ']' 00:09:09.678 14:11:01 -- common/autotest_common.sh@940 -- # kill -0 116455 00:09:09.678 14:11:01 -- common/autotest_common.sh@941 -- # uname 00:09:09.678 14:11:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:09.678 14:11:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116455 00:09:09.678 14:11:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:09.678 14:11:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:09.678 killing process with pid 116455 00:09:09.678 14:11:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116455' 00:09:09.678 14:11:01 -- common/autotest_common.sh@955 -- # kill 116455 00:09:09.678 14:11:01 -- common/autotest_common.sh@960 -- # wait 116455 00:09:10.244 14:11:02 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 116455 00:09:10.244 14:11:02 -- common/autotest_common.sh@650 -- # local es=0 00:09:10.244 14:11:02 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 116455 00:09:10.244 14:11:02 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:10.244 14:11:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.244 14:11:02 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:10.244 14:11:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.244 14:11:02 -- common/autotest_common.sh@653 -- # waitforlisten 116455 00:09:10.244 14:11:02 -- common/autotest_common.sh@829 -- # '[' -z 116455 ']' 00:09:10.244 14:11:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.244 
14:11:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.244 14:11:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.244 14:11:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.244 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:09:10.244 ERROR: process (pid: 116455) is no longer running 00:09:10.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (116455) - No such process 00:09:10.244 14:11:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.244 14:11:02 -- common/autotest_common.sh@862 -- # return 1 00:09:10.244 14:11:02 -- common/autotest_common.sh@653 -- # es=1 00:09:10.244 14:11:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.245 14:11:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.245 14:11:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.245 14:11:02 -- event/cpu_locks.sh@54 -- # no_locks 00:09:10.245 14:11:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:10.245 14:11:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:10.245 14:11:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:10.245 ************************************ 00:09:10.245 END TEST default_locks 00:09:10.245 ************************************ 00:09:10.245 00:09:10.245 real 0m2.023s 00:09:10.245 user 0m1.930s 00:09:10.245 sys 0m0.803s 00:09:10.245 14:11:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.245 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:09:10.245 14:11:02 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:10.245 14:11:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:10.245 14:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.245 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:09:10.245 ************************************ 00:09:10.245 START TEST default_locks_via_rpc 00:09:10.245 ************************************ 00:09:10.245 14:11:02 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:09:10.245 14:11:02 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116516 00:09:10.245 14:11:02 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.245 14:11:02 -- event/cpu_locks.sh@63 -- # waitforlisten 116516 00:09:10.245 14:11:02 -- common/autotest_common.sh@829 -- # '[' -z 116516 ']' 00:09:10.245 14:11:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.245 14:11:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.245 14:11:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.245 14:11:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.245 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:09:10.502 [2024-11-18 14:11:02.328872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
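default_locks above tears down through the pattern every test in this file reuses: killprocess confirms the pid is alive with kill -0, resolves its command name with ps (special-casing sudo wrappers, simplified to a refusal below), then kills and waits; the NOT waitforlisten that follows asserts the dead pid really is gone, which is why the 'No such process' line is expected output rather than a failure. A reduced sketch of the helper, with the real autotest_common.sh version handling more cases:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                   # still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
            [ "$name" != sudo ] || return 1          # sudo wrappers need care
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }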
00:09:10.502 [2024-11-18 14:11:02.329148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116516 ] 00:09:10.502 [2024-11-18 14:11:02.479734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.761 [2024-11-18 14:11:02.584755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:10.761 [2024-11-18 14:11:02.585056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.328 14:11:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.328 14:11:03 -- common/autotest_common.sh@862 -- # return 0 00:09:11.328 14:11:03 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:11.328 14:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.328 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:09:11.328 14:11:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.328 14:11:03 -- event/cpu_locks.sh@67 -- # no_locks 00:09:11.328 14:11:03 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:11.328 14:11:03 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:11.328 14:11:03 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:11.328 14:11:03 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:11.328 14:11:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.328 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:09:11.328 14:11:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.328 14:11:03 -- event/cpu_locks.sh@71 -- # locks_exist 116516 00:09:11.328 14:11:03 -- event/cpu_locks.sh@22 -- # lslocks -p 116516 00:09:11.328 14:11:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:11.586 14:11:03 -- event/cpu_locks.sh@73 -- # killprocess 116516 00:09:11.587 14:11:03 -- common/autotest_common.sh@936 -- # '[' -z 116516 ']' 00:09:11.587 14:11:03 -- common/autotest_common.sh@940 -- # kill -0 116516 00:09:11.587 14:11:03 -- common/autotest_common.sh@941 -- # uname 00:09:11.587 14:11:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.587 14:11:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116516 00:09:11.587 14:11:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.587 killing process with pid 116516 00:09:11.587 14:11:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.587 14:11:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116516' 00:09:11.587 14:11:03 -- common/autotest_common.sh@955 -- # kill 116516 00:09:11.587 14:11:03 -- common/autotest_common.sh@960 -- # wait 116516 00:09:12.521 00:09:12.521 real 0m2.040s 00:09:12.521 user 0m2.046s 00:09:12.521 sys 0m0.719s 00:09:12.521 14:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.521 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:09:12.521 ************************************ 00:09:12.521 END TEST default_locks_via_rpc 00:09:12.521 ************************************ 00:09:12.521 14:11:04 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:12.521 14:11:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.521 14:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.521 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:09:12.521 
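default_locks_via_rpc, which just finished, covers the same ground but flips the locks at runtime rather than at startup: the target boots holding its core lock, releases it over JSON-RPC, is checked lock-free, then re-claims it, all without restarting. Since rpc_cmd in the trace wraps scripts/rpc.py, the exchange reduces to two calls on the default socket:

    # drop the per-core lock files held by the running target
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # at this point lslocks -p <pid> shows no spdk_cpu_lock entries
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # and the grep -q spdk_cpu_lock check succeeds again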
************************************ 00:09:12.521 START TEST non_locking_app_on_locked_coremask 00:09:12.521 ************************************ 00:09:12.521 14:11:04 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:09:12.521 14:11:04 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116576 00:09:12.521 14:11:04 -- event/cpu_locks.sh@81 -- # waitforlisten 116576 /var/tmp/spdk.sock 00:09:12.521 14:11:04 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:12.521 14:11:04 -- common/autotest_common.sh@829 -- # '[' -z 116576 ']' 00:09:12.521 14:11:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.521 14:11:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.521 14:11:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.521 14:11:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.521 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:09:12.521 [2024-11-18 14:11:04.426040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:12.521 [2024-11-18 14:11:04.426300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116576 ] 00:09:12.521 [2024-11-18 14:11:04.572047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.780 [2024-11-18 14:11:04.671825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:12.780 [2024-11-18 14:11:04.672223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.717 14:11:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.717 14:11:05 -- common/autotest_common.sh@862 -- # return 0 00:09:13.717 14:11:05 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116599 00:09:13.717 14:11:05 -- event/cpu_locks.sh@85 -- # waitforlisten 116599 /var/tmp/spdk2.sock 00:09:13.717 14:11:05 -- common/autotest_common.sh@829 -- # '[' -z 116599 ']' 00:09:13.717 14:11:05 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:13.717 14:11:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:13.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:13.717 14:11:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.717 14:11:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:13.717 14:11:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.717 14:11:05 -- common/autotest_common.sh@10 -- # set +x 00:09:13.717 [2024-11-18 14:11:05.506035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:13.717 [2024-11-18 14:11:05.506280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116599 ] 00:09:13.717 [2024-11-18 14:11:05.665451] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
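That closing 'CPU core locks deactivated' notice is the second target (pid 116599) coming up on the very core 0 that pid 116576 still holds; it can only do so because it opts out of the locking scheme and listens on its own RPC socket. The launch pair from the trace, reduced to its flags:

    # first instance claims core 0 and the default /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 &
    # second instance shares core 0 by skipping the lock, on a second socket
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &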
00:09:13.717 [2024-11-18 14:11:05.665529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.976 [2024-11-18 14:11:05.849828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.976 [2024-11-18 14:11:05.850084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.542 14:11:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.542 14:11:06 -- common/autotest_common.sh@862 -- # return 0 00:09:14.542 14:11:06 -- event/cpu_locks.sh@87 -- # locks_exist 116576 00:09:14.542 14:11:06 -- event/cpu_locks.sh@22 -- # lslocks -p 116576 00:09:14.542 14:11:06 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:15.109 14:11:06 -- event/cpu_locks.sh@89 -- # killprocess 116576 00:09:15.109 14:11:06 -- common/autotest_common.sh@936 -- # '[' -z 116576 ']' 00:09:15.109 14:11:06 -- common/autotest_common.sh@940 -- # kill -0 116576 00:09:15.109 14:11:06 -- common/autotest_common.sh@941 -- # uname 00:09:15.109 14:11:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:15.109 14:11:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116576 00:09:15.109 14:11:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:15.109 killing process with pid 116576 00:09:15.109 14:11:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:15.109 14:11:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116576' 00:09:15.109 14:11:06 -- common/autotest_common.sh@955 -- # kill 116576 00:09:15.109 14:11:06 -- common/autotest_common.sh@960 -- # wait 116576 00:09:16.044 14:11:08 -- event/cpu_locks.sh@90 -- # killprocess 116599 00:09:16.044 14:11:08 -- common/autotest_common.sh@936 -- # '[' -z 116599 ']' 00:09:16.044 14:11:08 -- common/autotest_common.sh@940 -- # kill -0 116599 00:09:16.044 14:11:08 -- common/autotest_common.sh@941 -- # uname 00:09:16.044 14:11:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.044 14:11:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116599 00:09:16.044 14:11:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:16.044 14:11:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:16.044 killing process with pid 116599 00:09:16.044 14:11:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116599' 00:09:16.044 14:11:08 -- common/autotest_common.sh@955 -- # kill 116599 00:09:16.044 14:11:08 -- common/autotest_common.sh@960 -- # wait 116599 00:09:16.611 00:09:16.611 real 0m4.218s 00:09:16.611 user 0m4.456s 00:09:16.611 sys 0m1.284s 00:09:16.611 ************************************ 00:09:16.611 END TEST non_locking_app_on_locked_coremask 00:09:16.611 ************************************ 00:09:16.611 14:11:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:16.611 14:11:08 -- common/autotest_common.sh@10 -- # set +x 00:09:16.611 14:11:08 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:16.611 14:11:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:16.611 14:11:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.611 14:11:08 -- common/autotest_common.sh@10 -- # set +x 00:09:16.611 ************************************ 00:09:16.611 START TEST locking_app_on_unlocked_coremask 00:09:16.611 ************************************ 00:09:16.611 14:11:08 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:09:16.611 
14:11:08 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116680 00:09:16.611 14:11:08 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:16.611 14:11:08 -- event/cpu_locks.sh@99 -- # waitforlisten 116680 /var/tmp/spdk.sock 00:09:16.611 14:11:08 -- common/autotest_common.sh@829 -- # '[' -z 116680 ']' 00:09:16.611 14:11:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.611 14:11:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.611 14:11:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.611 14:11:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.611 14:11:08 -- common/autotest_common.sh@10 -- # set +x 00:09:16.611 [2024-11-18 14:11:08.682115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:16.612 [2024-11-18 14:11:08.682318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116680 ] 00:09:16.870 [2024-11-18 14:11:08.812397] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:16.870 [2024-11-18 14:11:08.812454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.870 [2024-11-18 14:11:08.894874] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.870 [2024-11-18 14:11:08.895147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:17.819 14:11:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.819 14:11:09 -- common/autotest_common.sh@862 -- # return 0 00:09:17.819 14:11:09 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:17.820 14:11:09 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=116701 00:09:17.820 14:11:09 -- event/cpu_locks.sh@103 -- # waitforlisten 116701 /var/tmp/spdk2.sock 00:09:17.820 14:11:09 -- common/autotest_common.sh@829 -- # '[' -z 116701 ']' 00:09:17.820 14:11:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:17.820 14:11:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.820 14:11:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:17.820 14:11:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.820 14:11:09 -- common/autotest_common.sh@10 -- # set +x 00:09:17.820 [2024-11-18 14:11:09.661400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
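locking_app_on_unlocked_coremask inverts the previous case: this time the first target (116680) is the one started with --disable-cpumask-locks, so core 0 is left unclaimed and the plain second instance (116701) now starting can take the lock for itself. The locks_exist assertion used after each of these launches is nothing more than lslocks output filtered for the lock-file name:

    locks_exist() {
        # a claimed core shows up as a lock on /var/tmp/spdk_cpu_lock_*
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 116701 && echo 'pid 116701 holds its core lock'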
00:09:17.820 [2024-11-18 14:11:09.661961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116701 ] 00:09:17.820 [2024-11-18 14:11:09.811490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.079 [2024-11-18 14:11:09.963828] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:18.079 [2024-11-18 14:11:09.964086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.646 14:11:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.646 14:11:10 -- common/autotest_common.sh@862 -- # return 0 00:09:18.646 14:11:10 -- event/cpu_locks.sh@105 -- # locks_exist 116701 00:09:18.646 14:11:10 -- event/cpu_locks.sh@22 -- # lslocks -p 116701 00:09:18.646 14:11:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:18.905 14:11:10 -- event/cpu_locks.sh@107 -- # killprocess 116680 00:09:18.905 14:11:10 -- common/autotest_common.sh@936 -- # '[' -z 116680 ']' 00:09:18.905 14:11:10 -- common/autotest_common.sh@940 -- # kill -0 116680 00:09:18.905 14:11:10 -- common/autotest_common.sh@941 -- # uname 00:09:18.905 14:11:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:18.905 14:11:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116680 00:09:19.164 14:11:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:19.164 14:11:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:19.164 14:11:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116680' 00:09:19.164 killing process with pid 116680 00:09:19.164 14:11:10 -- common/autotest_common.sh@955 -- # kill 116680 00:09:19.164 14:11:10 -- common/autotest_common.sh@960 -- # wait 116680 00:09:20.100 14:11:12 -- event/cpu_locks.sh@108 -- # killprocess 116701 00:09:20.100 14:11:12 -- common/autotest_common.sh@936 -- # '[' -z 116701 ']' 00:09:20.100 14:11:12 -- common/autotest_common.sh@940 -- # kill -0 116701 00:09:20.100 14:11:12 -- common/autotest_common.sh@941 -- # uname 00:09:20.100 14:11:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:20.100 14:11:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116701 00:09:20.100 killing process with pid 116701 00:09:20.100 14:11:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:20.100 14:11:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:20.100 14:11:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116701' 00:09:20.100 14:11:12 -- common/autotest_common.sh@955 -- # kill 116701 00:09:20.100 14:11:12 -- common/autotest_common.sh@960 -- # wait 116701 00:09:20.668 ************************************ 00:09:20.668 END TEST locking_app_on_unlocked_coremask 00:09:20.668 ************************************ 00:09:20.668 00:09:20.668 real 0m4.010s 00:09:20.668 user 0m4.156s 00:09:20.668 sys 0m1.177s 00:09:20.668 14:11:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.668 14:11:12 -- common/autotest_common.sh@10 -- # set +x 00:09:20.668 14:11:12 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:20.668 14:11:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.668 14:11:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.668 14:11:12 -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.668 ************************************ 00:09:20.668 START TEST locking_app_on_locked_coremask 00:09:20.668 ************************************ 00:09:20.668 14:11:12 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:09:20.668 14:11:12 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=116775 00:09:20.668 14:11:12 -- event/cpu_locks.sh@116 -- # waitforlisten 116775 /var/tmp/spdk.sock 00:09:20.668 14:11:12 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:20.668 14:11:12 -- common/autotest_common.sh@829 -- # '[' -z 116775 ']' 00:09:20.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.668 14:11:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.668 14:11:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.668 14:11:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.668 14:11:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.668 14:11:12 -- common/autotest_common.sh@10 -- # set +x 00:09:20.927 [2024-11-18 14:11:12.760176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:20.927 [2024-11-18 14:11:12.761335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116775 ] 00:09:20.927 [2024-11-18 14:11:12.904821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.927 [2024-11-18 14:11:12.986012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:20.927 [2024-11-18 14:11:12.986592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.862 14:11:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.862 14:11:13 -- common/autotest_common.sh@862 -- # return 0 00:09:21.862 14:11:13 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=116796 00:09:21.862 14:11:13 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:21.862 14:11:13 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 116796 /var/tmp/spdk2.sock 00:09:21.862 14:11:13 -- common/autotest_common.sh@650 -- # local es=0 00:09:21.862 14:11:13 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 116796 /var/tmp/spdk2.sock 00:09:21.862 14:11:13 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:21.862 14:11:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.862 14:11:13 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:21.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
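Here both instances want exclusive use of core 0 and neither disables locking, so the second target (116796) is expected to abort during startup. What is being exercised is a per-core lock file that a second process cannot acquire while the first lives; the effect can be approximated with flock(1), though SPDK takes the lock itself inside app.c rather than through the utility:

    # shell 1: hold the core-0 lock file the way a running target would
    flock -n /var/tmp/spdk_cpu_lock_000 -c 'sleep 300' &
    # shell 2: a second claim fails at once, the analogue of
    # 'Cannot create lock on core 0, probably process ... has claimed it'
    flock -n /var/tmp/spdk_cpu_lock_000 -c true || echo 'core 0 already claimed'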
00:09:21.862 14:11:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.862 14:11:13 -- common/autotest_common.sh@653 -- # waitforlisten 116796 /var/tmp/spdk2.sock 00:09:21.862 14:11:13 -- common/autotest_common.sh@829 -- # '[' -z 116796 ']' 00:09:21.862 14:11:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:21.862 14:11:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.862 14:11:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:21.862 14:11:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.862 14:11:13 -- common/autotest_common.sh@10 -- # set +x 00:09:21.862 [2024-11-18 14:11:13.836153] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:21.862 [2024-11-18 14:11:13.836736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116796 ] 00:09:22.121 [2024-11-18 14:11:13.983194] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 116775 has claimed it. 00:09:22.121 [2024-11-18 14:11:13.995456] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:22.688 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (116796) - No such process 00:09:22.688 ERROR: process (pid: 116796) is no longer running 00:09:22.688 14:11:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.688 14:11:14 -- common/autotest_common.sh@862 -- # return 1 00:09:22.688 14:11:14 -- common/autotest_common.sh@653 -- # es=1 00:09:22.688 14:11:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.688 14:11:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.688 14:11:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.688 14:11:14 -- event/cpu_locks.sh@122 -- # locks_exist 116775 00:09:22.688 14:11:14 -- event/cpu_locks.sh@22 -- # lslocks -p 116775 00:09:22.688 14:11:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:22.948 14:11:14 -- event/cpu_locks.sh@124 -- # killprocess 116775 00:09:22.948 14:11:14 -- common/autotest_common.sh@936 -- # '[' -z 116775 ']' 00:09:22.948 14:11:14 -- common/autotest_common.sh@940 -- # kill -0 116775 00:09:22.948 14:11:14 -- common/autotest_common.sh@941 -- # uname 00:09:22.948 14:11:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:22.948 14:11:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116775 00:09:22.948 14:11:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:22.948 14:11:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:22.948 14:11:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116775' 00:09:22.948 killing process with pid 116775 00:09:22.948 14:11:14 -- common/autotest_common.sh@955 -- # kill 116775 00:09:22.948 14:11:14 -- common/autotest_common.sh@960 -- # wait 116775 00:09:23.515 00:09:23.515 real 0m2.814s 00:09:23.515 user 0m3.146s 00:09:23.515 sys 0m0.780s 00:09:23.515 14:11:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.515 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:09:23.515 ************************************ 00:09:23.515 END TEST locking_app_on_locked_coremask 00:09:23.515 
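locking_app_on_locked_coremask ends exactly as intended: the second target is refused the core and exits, and the NOT wrapper turns that refusal into a pass. NOT is how these negative tests assert failure without tripping the shell's error handling: it runs the command, captures a nonzero status, and itself succeeds only if there was one. Stripped of the argument validation visible in the trace, the idea is:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # death by signal stays an error
        (( es != 0 ))                    # succeed only if the command failed
    }
    NOT waitforlisten 116796 /var/tmp/spdk2.sock  # passes: startup was refused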
************************************ 00:09:23.515 14:11:15 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:23.515 14:11:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.515 14:11:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.515 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:09:23.515 ************************************ 00:09:23.515 START TEST locking_overlapped_coremask 00:09:23.515 ************************************ 00:09:23.515 14:11:15 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:09:23.515 14:11:15 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=116844 00:09:23.515 14:11:15 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:23.515 14:11:15 -- event/cpu_locks.sh@133 -- # waitforlisten 116844 /var/tmp/spdk.sock 00:09:23.515 14:11:15 -- common/autotest_common.sh@829 -- # '[' -z 116844 ']' 00:09:23.515 14:11:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.515 14:11:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.515 14:11:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.515 14:11:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.515 14:11:15 -- common/autotest_common.sh@10 -- # set +x 00:09:23.774 [2024-11-18 14:11:15.628605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:23.774 [2024-11-18 14:11:15.629088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116844 ] 00:09:23.774 [2024-11-18 14:11:15.784470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.033 [2024-11-18 14:11:15.876899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:24.033 [2024-11-18 14:11:15.877502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.033 [2024-11-18 14:11:15.877812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.033 [2024-11-18 14:11:15.877792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.601 14:11:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.601 14:11:16 -- common/autotest_common.sh@862 -- # return 0 00:09:24.601 14:11:16 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=116867 00:09:24.601 14:11:16 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 116867 /var/tmp/spdk2.sock 00:09:24.601 14:11:16 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:24.601 14:11:16 -- common/autotest_common.sh@650 -- # local es=0 00:09:24.601 14:11:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 116867 /var/tmp/spdk2.sock 00:09:24.601 14:11:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:24.601 14:11:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.601 14:11:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:24.601 14:11:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.601 14:11:16 -- common/autotest_common.sh@653 -- # 
waitforlisten 116867 /var/tmp/spdk2.sock 00:09:24.601 14:11:16 -- common/autotest_common.sh@829 -- # '[' -z 116867 ']' 00:09:24.601 14:11:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:24.601 14:11:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.601 14:11:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:24.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:24.601 14:11:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.601 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:09:24.601 [2024-11-18 14:11:16.641759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:24.601 [2024-11-18 14:11:16.642320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116867 ] 00:09:24.860 [2024-11-18 14:11:16.803400] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 116844 has claimed it. 00:09:24.860 [2024-11-18 14:11:16.819213] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:25.428 ERROR: process (pid: 116867) is no longer running 00:09:25.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (116867) - No such process 00:09:25.428 14:11:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.428 14:11:17 -- common/autotest_common.sh@862 -- # return 1 00:09:25.428 14:11:17 -- common/autotest_common.sh@653 -- # es=1 00:09:25.428 14:11:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.428 14:11:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.428 14:11:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.428 14:11:17 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:25.428 14:11:17 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:25.428 14:11:17 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:25.428 14:11:17 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:25.428 14:11:17 -- event/cpu_locks.sh@141 -- # killprocess 116844 00:09:25.428 14:11:17 -- common/autotest_common.sh@936 -- # '[' -z 116844 ']' 00:09:25.428 14:11:17 -- common/autotest_common.sh@940 -- # kill -0 116844 00:09:25.428 14:11:17 -- common/autotest_common.sh@941 -- # uname 00:09:25.428 14:11:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:25.428 14:11:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116844 00:09:25.428 14:11:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:25.428 14:11:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:25.428 14:11:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116844' 00:09:25.428 killing process with pid 116844 00:09:25.428 14:11:17 -- common/autotest_common.sh@955 -- # kill 116844 00:09:25.428 14:11:17 -- common/autotest_common.sh@960 -- # wait 116844 00:09:25.995 ************************************ 00:09:25.995 END TEST 
locking_overlapped_coremask 00:09:25.995 ************************************ 00:09:25.995 00:09:25.995 real 0m2.418s 00:09:25.995 user 0m6.383s 00:09:25.995 sys 0m0.648s 00:09:25.995 14:11:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:25.995 14:11:17 -- common/autotest_common.sh@10 -- # set +x 00:09:25.995 14:11:18 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:25.995 14:11:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:25.996 14:11:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.996 14:11:18 -- common/autotest_common.sh@10 -- # set +x 00:09:25.996 ************************************ 00:09:25.996 START TEST locking_overlapped_coremask_via_rpc 00:09:25.996 ************************************ 00:09:25.996 14:11:18 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:09:25.996 14:11:18 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=116925 00:09:25.996 14:11:18 -- event/cpu_locks.sh@149 -- # waitforlisten 116925 /var/tmp/spdk.sock 00:09:25.996 14:11:18 -- common/autotest_common.sh@829 -- # '[' -z 116925 ']' 00:09:25.996 14:11:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.996 14:11:18 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:25.996 14:11:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.996 14:11:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.996 14:11:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.996 14:11:18 -- common/autotest_common.sh@10 -- # set +x 00:09:26.255 [2024-11-18 14:11:18.096858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:26.255 [2024-11-18 14:11:18.097126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116925 ] 00:09:26.255 [2024-11-18 14:11:18.254146] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
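locking_overlapped_coremask closed with check_remaining_locks, asserting that a target started on mask 0x7 left behind exactly the lock files for cores 0 through 2 and nothing else; the traced check is a straight glob-against-brace-expansion comparison:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }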
00:09:26.255 [2024-11-18 14:11:18.254344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:26.514 [2024-11-18 14:11:18.344227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:26.514 [2024-11-18 14:11:18.344810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.514 [2024-11-18 14:11:18.345111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.514 [2024-11-18 14:11:18.345112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.082 14:11:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.082 14:11:19 -- common/autotest_common.sh@862 -- # return 0 00:09:27.082 14:11:19 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=116942 00:09:27.082 14:11:19 -- event/cpu_locks.sh@153 -- # waitforlisten 116942 /var/tmp/spdk2.sock 00:09:27.082 14:11:19 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:27.082 14:11:19 -- common/autotest_common.sh@829 -- # '[' -z 116942 ']' 00:09:27.082 14:11:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:27.082 14:11:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:27.082 14:11:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:27.082 14:11:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.082 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:09:27.082 [2024-11-18 14:11:19.088619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:27.082 [2024-11-18 14:11:19.088895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116942 ] 00:09:27.341 [2024-11-18 14:11:19.272986] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
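For the via-RPC variant both targets come up cleanly with locking disabled, on deliberately overlapping masks: 0x7 covers cores 0 through 2, 0x1c covers cores 2 through 4, and core 2 is the one they will contend for once locking is switched back on. The overlap is plain bit arithmetic:

    printf 'shared mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2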
00:09:27.341 [2024-11-18 14:11:19.273284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.600 [2024-11-18 14:11:19.503988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:27.600 [2024-11-18 14:11:19.519788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.600 [2024-11-18 14:11:19.519971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.600 [2024-11-18 14:11:19.519972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:28.978 14:11:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.978 14:11:20 -- common/autotest_common.sh@862 -- # return 0 00:09:28.978 14:11:20 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:28.978 14:11:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.978 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:09:28.978 14:11:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.978 14:11:20 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:28.978 14:11:20 -- common/autotest_common.sh@650 -- # local es=0 00:09:28.978 14:11:20 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:28.978 14:11:20 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:28.978 14:11:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.978 14:11:20 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:28.978 14:11:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.978 14:11:20 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:28.978 14:11:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.978 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:09:28.978 [2024-11-18 14:11:20.787540] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 116925 has claimed it. 00:09:28.978 request: 00:09:28.978 { 00:09:28.978 "method": "framework_enable_cpumask_locks", 00:09:28.978 "req_id": 1 00:09:28.978 } 00:09:28.978 Got JSON-RPC error response 00:09:28.978 response: 00:09:28.978 { 00:09:28.978 "code": -32603, 00:09:28.978 "message": "Failed to claim CPU core: 2" 00:09:28.978 } 00:09:28.978 14:11:20 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:28.978 14:11:20 -- common/autotest_common.sh@653 -- # es=1 00:09:28.978 14:11:20 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.978 14:11:20 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:28.978 14:11:20 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.978 14:11:20 -- event/cpu_locks.sh@158 -- # waitforlisten 116925 /var/tmp/spdk.sock 00:09:28.978 14:11:20 -- common/autotest_common.sh@829 -- # '[' -z 116925 ']' 00:09:28.978 14:11:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.978 14:11:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.978 14:11:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
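That JSON-RPC exchange is the point of the whole test: the first target claims cores 0 through 2 at runtime, and when the second asks to claim its own mask, core 2 is already taken, so the request is rejected with error -32603 while the process itself stays up, unlike the startup-time conflicts earlier where the loser died. As bare rpc.py calls against the two sockets from the trace:

    # first instance: succeeds, lock files appear for cores 0-2
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second instance: rejected with 'Failed to claim CPU core: 2',
    # yet the target keeps running and remains usable over RPC
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'claim rejected as expected'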
00:09:28.978 14:11:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.978 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:09:28.978 14:11:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.978 14:11:21 -- common/autotest_common.sh@862 -- # return 0 00:09:28.978 14:11:21 -- event/cpu_locks.sh@159 -- # waitforlisten 116942 /var/tmp/spdk2.sock 00:09:28.978 14:11:21 -- common/autotest_common.sh@829 -- # '[' -z 116942 ']' 00:09:28.978 14:11:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.978 14:11:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.978 14:11:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.978 14:11:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.978 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:09:29.247 14:11:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.247 14:11:21 -- common/autotest_common.sh@862 -- # return 0 00:09:29.247 14:11:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:29.247 14:11:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:29.247 14:11:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:29.247 14:11:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:29.247 00:09:29.247 real 0m3.195s 00:09:29.247 user 0m1.411s 00:09:29.247 sys 0m0.230s 00:09:29.247 14:11:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:29.247 ************************************ 00:09:29.247 END TEST locking_overlapped_coremask_via_rpc 00:09:29.247 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:09:29.247 ************************************ 00:09:29.247 14:11:21 -- event/cpu_locks.sh@174 -- # cleanup 00:09:29.247 14:11:21 -- event/cpu_locks.sh@15 -- # [[ -z 116925 ]] 00:09:29.247 14:11:21 -- event/cpu_locks.sh@15 -- # killprocess 116925 00:09:29.247 14:11:21 -- common/autotest_common.sh@936 -- # '[' -z 116925 ']' 00:09:29.247 14:11:21 -- common/autotest_common.sh@940 -- # kill -0 116925 00:09:29.247 14:11:21 -- common/autotest_common.sh@941 -- # uname 00:09:29.247 14:11:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:29.247 14:11:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116925 00:09:29.247 14:11:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:29.247 14:11:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:29.247 killing process with pid 116925 00:09:29.247 14:11:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116925' 00:09:29.247 14:11:21 -- common/autotest_common.sh@955 -- # kill 116925 00:09:29.247 14:11:21 -- common/autotest_common.sh@960 -- # wait 116925 00:09:30.185 14:11:21 -- event/cpu_locks.sh@16 -- # [[ -z 116942 ]] 00:09:30.185 14:11:21 -- event/cpu_locks.sh@16 -- # killprocess 116942 00:09:30.185 14:11:21 -- common/autotest_common.sh@936 -- # '[' -z 116942 ']' 00:09:30.185 14:11:21 -- common/autotest_common.sh@940 -- # kill -0 116942 00:09:30.185 14:11:21 -- common/autotest_common.sh@941 -- # uname 00:09:30.185 
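The suite is now unwinding through the cleanup trap registered at the top of cpu_locks.sh (trap cleanup EXIT SIGTERM SIGINT, traced earlier): the killprocess calls it makes will find both pids already gone, so the 'No such process' and 'is not found' lines that follow are tolerated, and a final rm -f clears leftover state so a failed run cannot poison the next one. The trace elides rm's arguments; assuming the per-core lock files are among them, which matches the check_remaining_locks bookkeeping above, the pattern in miniature is:

    cleanup() {
        [[ -z $spdk_tgt_pid ]] || killprocess "$spdk_tgt_pid" || true
        rm -f /var/tmp/spdk_cpu_lock_*   # assumed target: stale core locks
    }
    trap cleanup EXIT SIGTERM SIGINT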
14:11:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.185 14:11:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116942 00:09:30.185 14:11:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:30.185 14:11:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:30.185 killing process with pid 116942 00:09:30.185 14:11:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116942' 00:09:30.185 14:11:21 -- common/autotest_common.sh@955 -- # kill 116942 00:09:30.185 14:11:21 -- common/autotest_common.sh@960 -- # wait 116942 00:09:30.753 14:11:22 -- event/cpu_locks.sh@18 -- # rm -f 00:09:30.753 14:11:22 -- event/cpu_locks.sh@1 -- # cleanup 00:09:30.753 14:11:22 -- event/cpu_locks.sh@15 -- # [[ -z 116925 ]] 00:09:30.753 14:11:22 -- event/cpu_locks.sh@15 -- # killprocess 116925 00:09:30.753 14:11:22 -- common/autotest_common.sh@936 -- # '[' -z 116925 ']' 00:09:30.753 14:11:22 -- common/autotest_common.sh@940 -- # kill -0 116925 00:09:30.753 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (116925) - No such process 00:09:30.753 Process with pid 116925 is not found 00:09:30.753 14:11:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 116925 is not found' 00:09:30.753 14:11:22 -- event/cpu_locks.sh@16 -- # [[ -z 116942 ]] 00:09:30.753 14:11:22 -- event/cpu_locks.sh@16 -- # killprocess 116942 00:09:30.753 14:11:22 -- common/autotest_common.sh@936 -- # '[' -z 116942 ']' 00:09:30.753 14:11:22 -- common/autotest_common.sh@940 -- # kill -0 116942 00:09:30.753 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (116942) - No such process 00:09:30.753 Process with pid 116942 is not found 00:09:30.753 14:11:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 116942 is not found' 00:09:30.753 14:11:22 -- event/cpu_locks.sh@18 -- # rm -f 00:09:30.753 00:09:30.753 real 0m22.522s 00:09:30.753 user 0m40.128s 00:09:30.753 sys 0m6.809s 00:09:30.753 14:11:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:30.753 14:11:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.753 ************************************ 00:09:30.753 END TEST cpu_locks 00:09:30.753 ************************************ 00:09:30.753 00:09:30.753 real 0m50.291s 00:09:30.753 user 1m36.441s 00:09:30.753 sys 0m10.560s 00:09:30.753 14:11:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:30.753 14:11:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.753 ************************************ 00:09:30.753 END TEST event 00:09:30.753 ************************************ 00:09:30.753 14:11:22 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:30.753 14:11:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:30.753 14:11:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.753 14:11:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.753 ************************************ 00:09:30.753 START TEST thread 00:09:30.753 ************************************ 00:09:30.753 14:11:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:30.753 * Looking for test storage... 
00:09:30.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:30.753 14:11:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:30.753 14:11:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:30.753 14:11:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:30.753 14:11:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:30.753 14:11:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:30.753 14:11:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:30.753 14:11:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:30.753 14:11:22 -- scripts/common.sh@335 -- # IFS=.-: 00:09:30.753 14:11:22 -- scripts/common.sh@335 -- # read -ra ver1 00:09:30.753 14:11:22 -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.753 14:11:22 -- scripts/common.sh@336 -- # read -ra ver2 00:09:30.753 14:11:22 -- scripts/common.sh@337 -- # local 'op=<' 00:09:30.753 14:11:22 -- scripts/common.sh@339 -- # ver1_l=2 00:09:30.753 14:11:22 -- scripts/common.sh@340 -- # ver2_l=1 00:09:30.753 14:11:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:30.754 14:11:22 -- scripts/common.sh@343 -- # case "$op" in 00:09:30.754 14:11:22 -- scripts/common.sh@344 -- # : 1 00:09:30.754 14:11:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:30.754 14:11:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.754 14:11:22 -- scripts/common.sh@364 -- # decimal 1 00:09:30.754 14:11:22 -- scripts/common.sh@352 -- # local d=1 00:09:30.754 14:11:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.754 14:11:22 -- scripts/common.sh@354 -- # echo 1 00:09:30.754 14:11:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:30.754 14:11:22 -- scripts/common.sh@365 -- # decimal 2 00:09:30.754 14:11:22 -- scripts/common.sh@352 -- # local d=2 00:09:30.754 14:11:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.754 14:11:22 -- scripts/common.sh@354 -- # echo 2 00:09:30.754 14:11:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:30.754 14:11:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:30.754 14:11:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:30.754 14:11:22 -- scripts/common.sh@367 -- # return 0 00:09:30.754 14:11:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.754 14:11:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.754 --rc genhtml_branch_coverage=1 00:09:30.754 --rc genhtml_function_coverage=1 00:09:30.754 --rc genhtml_legend=1 00:09:30.754 --rc geninfo_all_blocks=1 00:09:30.754 --rc geninfo_unexecuted_blocks=1 00:09:30.754 00:09:30.754 ' 00:09:30.754 14:11:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.754 --rc genhtml_branch_coverage=1 00:09:30.754 --rc genhtml_function_coverage=1 00:09:30.754 --rc genhtml_legend=1 00:09:30.754 --rc geninfo_all_blocks=1 00:09:30.754 --rc geninfo_unexecuted_blocks=1 00:09:30.754 00:09:30.754 ' 00:09:30.754 14:11:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.754 --rc genhtml_branch_coverage=1 00:09:30.754 --rc genhtml_function_coverage=1 00:09:30.754 --rc genhtml_legend=1 00:09:30.754 --rc geninfo_all_blocks=1 00:09:30.754 --rc geninfo_unexecuted_blocks=1 00:09:30.754 00:09:30.754 ' 00:09:30.754 14:11:22 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:30.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.754 --rc genhtml_branch_coverage=1 00:09:30.754 --rc genhtml_function_coverage=1 00:09:30.754 --rc genhtml_legend=1 00:09:30.754 --rc geninfo_all_blocks=1 00:09:30.754 --rc geninfo_unexecuted_blocks=1 00:09:30.754 00:09:30.754 ' 00:09:30.754 14:11:22 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:30.754 14:11:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:30.754 14:11:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.754 14:11:22 -- common/autotest_common.sh@10 -- # set +x 00:09:30.754 ************************************ 00:09:30.754 START TEST thread_poller_perf 00:09:30.754 ************************************ 00:09:30.754 14:11:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:31.013 [2024-11-18 14:11:22.825817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:31.013 [2024-11-18 14:11:22.826053] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117094 ] 00:09:31.013 [2024-11-18 14:11:22.966885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.013 [2024-11-18 14:11:23.035406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.013 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:32.393 [2024-11-18T14:11:24.467Z] ====================================== 00:09:32.393 [2024-11-18T14:11:24.467Z] busy:2210588950 (cyc) 00:09:32.393 [2024-11-18T14:11:24.467Z] total_run_count: 380000 00:09:32.393 [2024-11-18T14:11:24.467Z] tsc_hz: 2200000000 (cyc) 00:09:32.393 [2024-11-18T14:11:24.467Z] ====================================== 00:09:32.393 [2024-11-18T14:11:24.467Z] poller_cost: 5817 (cyc), 2644 (nsec) 00:09:32.393 00:09:32.393 real 0m1.343s 00:09:32.393 user 0m1.151s 00:09:32.393 sys 0m0.092s 00:09:32.393 14:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.393 14:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:32.393 ************************************ 00:09:32.393 END TEST thread_poller_perf 00:09:32.393 ************************************ 00:09:32.393 14:11:24 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:32.393 14:11:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:32.393 14:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.393 14:11:24 -- common/autotest_common.sh@10 -- # set +x 00:09:32.393 ************************************ 00:09:32.393 START TEST thread_poller_perf 00:09:32.393 ************************************ 00:09:32.393 14:11:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:32.393 [2024-11-18 14:11:24.223529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
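(The poller_cost printed in the table above is just busy cycles over run count, converted to wall time via the reported TSC frequency; a quick shell check of the arithmetic, using only the run's own counters:

  echo $(( 2210588950 / 380000 ))               # 5817 cycles per poll
  echo $(( 5817 * 1000000000 / 2200000000 ))    # 2644 ns at tsc_hz 2200000000 (cyc)
)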
00:09:32.393 [2024-11-18 14:11:24.223763] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117139 ] 00:09:32.393 [2024-11-18 14:11:24.370469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.393 [2024-11-18 14:11:24.446286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.393 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:33.770 [2024-11-18T14:11:25.844Z] ====================================== 00:09:33.770 [2024-11-18T14:11:25.844Z] busy:2204929728 (cyc) 00:09:33.770 [2024-11-18T14:11:25.844Z] total_run_count: 4854000 00:09:33.770 [2024-11-18T14:11:25.844Z] tsc_hz: 2200000000 (cyc) 00:09:33.770 [2024-11-18T14:11:25.844Z] ====================================== 00:09:33.770 [2024-11-18T14:11:25.844Z] poller_cost: 454 (cyc), 206 (nsec) 00:09:33.770 00:09:33.770 real 0m1.360s 00:09:33.770 user 0m1.157s 00:09:33.770 sys 0m0.102s 00:09:33.770 14:11:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.770 14:11:25 -- common/autotest_common.sh@10 -- # set +x 00:09:33.770 ************************************ 00:09:33.770 END TEST thread_poller_perf 00:09:33.770 ************************************ 00:09:33.770 14:11:25 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:33.770 14:11:25 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:33.770 14:11:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.770 14:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.770 14:11:25 -- common/autotest_common.sh@10 -- # set +x 00:09:33.770 ************************************ 00:09:33.770 START TEST thread_spdk_lock 00:09:33.770 ************************************ 00:09:33.770 14:11:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:33.770 [2024-11-18 14:11:25.637457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
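(Same check for the zero-period run above; with -l 0 the pollers appear to be registered as untimed busy pollers, which is why the per-poll overhead drops by more than a factor of ten against the 1 µs timed pollers of the previous run:

  echo $(( 2204929728 / 4854000 ))              # 454 cycles per poll
  echo $(( 454 * 1000000000 / 2200000000 ))     # 206 ns
)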
00:09:33.770 [2024-11-18 14:11:25.637743] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117183 ] 00:09:33.770 [2024-11-18 14:11:25.786398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:34.029 [2024-11-18 14:11:25.857957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.029 [2024-11-18 14:11:25.857957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.597 [2024-11-18 14:11:26.493897] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:34.597 [2024-11-18 14:11:26.494008] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:34.597 [2024-11-18 14:11:26.494068] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x558fc1b90980 00:09:34.597 [2024-11-18 14:11:26.495467] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:34.597 [2024-11-18 14:11:26.495591] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:34.597 [2024-11-18 14:11:26.495658] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:34.597 Starting test contend 00:09:34.597 Worker Delay Wait us Hold us Total us 00:09:34.597 0 3 121857 199745 321603 00:09:34.597 1 5 32670 315688 348358 00:09:34.597 PASS test contend 00:09:34.597 Starting test hold_by_poller 00:09:34.597 PASS test hold_by_poller 00:09:34.597 Starting test hold_by_message 00:09:34.597 PASS test hold_by_message 00:09:34.597 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:34.597 100014 assertions passed 00:09:34.597 0 assertions failed 00:09:34.597 00:09:34.597 real 0m1.014s 00:09:34.597 user 0m1.470s 00:09:34.597 sys 0m0.083s 00:09:34.597 14:11:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.597 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 ************************************ 00:09:34.597 END TEST thread_spdk_lock 00:09:34.597 ************************************ 00:09:34.597 00:09:34.597 real 0m4.037s 00:09:34.597 user 0m3.980s 00:09:34.597 sys 0m0.394s 00:09:34.597 14:11:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.597 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:09:34.597 ************************************ 00:09:34.597 END TEST thread 00:09:34.597 ************************************ 00:09:34.857 14:11:26 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:34.857 14:11:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:34.857 14:11:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.857 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:09:34.857 ************************************ 00:09:34.857 START TEST accel 00:09:34.857 
************************************ 00:09:34.857 14:11:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:34.857 * Looking for test storage... 00:09:34.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:34.857 14:11:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:34.857 14:11:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:34.857 14:11:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:34.857 14:11:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:34.857 14:11:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:34.857 14:11:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:34.857 14:11:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:34.857 14:11:26 -- scripts/common.sh@335 -- # IFS=.-: 00:09:34.857 14:11:26 -- scripts/common.sh@335 -- # read -ra ver1 00:09:34.857 14:11:26 -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.857 14:11:26 -- scripts/common.sh@336 -- # read -ra ver2 00:09:34.857 14:11:26 -- scripts/common.sh@337 -- # local 'op=<' 00:09:34.857 14:11:26 -- scripts/common.sh@339 -- # ver1_l=2 00:09:34.857 14:11:26 -- scripts/common.sh@340 -- # ver2_l=1 00:09:34.857 14:11:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:34.857 14:11:26 -- scripts/common.sh@343 -- # case "$op" in 00:09:34.857 14:11:26 -- scripts/common.sh@344 -- # : 1 00:09:34.857 14:11:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:34.857 14:11:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.857 14:11:26 -- scripts/common.sh@364 -- # decimal 1 00:09:34.857 14:11:26 -- scripts/common.sh@352 -- # local d=1 00:09:34.857 14:11:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.857 14:11:26 -- scripts/common.sh@354 -- # echo 1 00:09:34.857 14:11:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:34.857 14:11:26 -- scripts/common.sh@365 -- # decimal 2 00:09:34.857 14:11:26 -- scripts/common.sh@352 -- # local d=2 00:09:34.857 14:11:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.857 14:11:26 -- scripts/common.sh@354 -- # echo 2 00:09:34.857 14:11:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:34.857 14:11:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:34.857 14:11:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:34.857 14:11:26 -- scripts/common.sh@367 -- # return 0 00:09:34.857 14:11:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.857 14:11:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.857 --rc genhtml_branch_coverage=1 00:09:34.857 --rc genhtml_function_coverage=1 00:09:34.857 --rc genhtml_legend=1 00:09:34.857 --rc geninfo_all_blocks=1 00:09:34.857 --rc geninfo_unexecuted_blocks=1 00:09:34.857 00:09:34.857 ' 00:09:34.857 14:11:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.857 --rc genhtml_branch_coverage=1 00:09:34.857 --rc genhtml_function_coverage=1 00:09:34.857 --rc genhtml_legend=1 00:09:34.857 --rc geninfo_all_blocks=1 00:09:34.857 --rc geninfo_unexecuted_blocks=1 00:09:34.857 00:09:34.857 ' 00:09:34.857 14:11:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.857 --rc genhtml_branch_coverage=1 00:09:34.857 --rc 
genhtml_function_coverage=1 00:09:34.857 --rc genhtml_legend=1 00:09:34.857 --rc geninfo_all_blocks=1 00:09:34.857 --rc geninfo_unexecuted_blocks=1 00:09:34.857 00:09:34.857 ' 00:09:34.857 14:11:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.857 --rc genhtml_branch_coverage=1 00:09:34.857 --rc genhtml_function_coverage=1 00:09:34.857 --rc genhtml_legend=1 00:09:34.857 --rc geninfo_all_blocks=1 00:09:34.857 --rc geninfo_unexecuted_blocks=1 00:09:34.857 00:09:34.857 ' 00:09:34.857 14:11:26 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:34.857 14:11:26 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:34.857 14:11:26 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:34.857 14:11:26 -- accel/accel.sh@59 -- # spdk_tgt_pid=117270 00:09:34.857 14:11:26 -- accel/accel.sh@60 -- # waitforlisten 117270 00:09:34.857 14:11:26 -- common/autotest_common.sh@829 -- # '[' -z 117270 ']' 00:09:34.857 14:11:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.857 14:11:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.857 14:11:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.857 14:11:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.857 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:09:34.857 14:11:26 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:34.857 14:11:26 -- accel/accel.sh@58 -- # build_accel_config 00:09:34.857 14:11:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:34.857 14:11:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:34.857 14:11:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.857 14:11:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:34.857 14:11:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:34.857 14:11:26 -- accel/accel.sh@41 -- # local IFS=, 00:09:34.857 14:11:26 -- accel/accel.sh@42 -- # jq -r . 00:09:35.126 [2024-11-18 14:11:26.953529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:35.126 [2024-11-18 14:11:26.953820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117270 ] 00:09:35.126 [2024-11-18 14:11:27.095263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.126 [2024-11-18 14:11:27.159296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.126 [2024-11-18 14:11:27.159580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.078 14:11:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.078 14:11:27 -- common/autotest_common.sh@862 -- # return 0 00:09:36.078 14:11:27 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:36.078 14:11:27 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:36.078 14:11:27 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:36.078 14:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.078 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:09:36.078 14:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 
14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # IFS== 00:09:36.078 14:11:27 -- accel/accel.sh@64 -- # read -r opc module 00:09:36.078 14:11:27 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:36.078 14:11:27 -- accel/accel.sh@67 -- # killprocess 117270 00:09:36.078 14:11:27 -- common/autotest_common.sh@936 -- # '[' -z 117270 ']' 00:09:36.078 14:11:27 -- common/autotest_common.sh@940 -- # kill -0 117270 00:09:36.078 14:11:27 -- common/autotest_common.sh@941 -- # uname 00:09:36.078 14:11:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:36.078 14:11:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 117270 00:09:36.078 14:11:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:36.078 14:11:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:36.078 14:11:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 117270' 00:09:36.078 killing process with pid 117270 00:09:36.078 14:11:27 -- common/autotest_common.sh@955 -- # kill 117270 00:09:36.078 14:11:27 -- common/autotest_common.sh@960 -- # wait 117270 00:09:36.645 14:11:28 -- accel/accel.sh@68 -- # trap - ERR 00:09:36.645 14:11:28 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:36.645 14:11:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:36.645 14:11:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.645 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.645 14:11:28 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:09:36.645 14:11:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:36.645 14:11:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:36.645 14:11:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:36.645 14:11:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.645 14:11:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.645 14:11:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:36.645 14:11:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:36.645 14:11:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:36.645 14:11:28 -- accel/accel.sh@42 -- # jq -r . 
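(Each accel test here starts accel_perf with -c /dev/fd/62: build_accel_config collects driver flags into the accel_json_cfg array — empty in these runs, since none of the -gt 0 checks fire — and the result is run through jq -r ., which appears to double as a JSON syntax check before the config reaches the app over the file descriptor. A minimal sketch of the same pattern, with a hypothetical empty payload:

  cfg='{}'                           # hypothetical: no accel modules configured
  echo "$cfg" | jq -r .              # fails fast on malformed JSON
  accel_perf -c <(echo "$cfg") -h    # process substitution playing the role of /dev/fd/62
)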
00:09:36.645 14:11:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.645 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.645 14:11:28 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:36.645 14:11:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:36.645 14:11:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.645 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.645 ************************************ 00:09:36.645 START TEST accel_missing_filename 00:09:36.645 ************************************ 00:09:36.645 14:11:28 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:09:36.645 14:11:28 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.645 14:11:28 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:36.645 14:11:28 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:36.645 14:11:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.645 14:11:28 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:36.645 14:11:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.645 14:11:28 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:09:36.645 14:11:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:36.645 14:11:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:36.645 14:11:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:36.645 14:11:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.645 14:11:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.645 14:11:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:36.645 14:11:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:36.645 14:11:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:36.645 14:11:28 -- accel/accel.sh@42 -- # jq -r . 00:09:36.645 [2024-11-18 14:11:28.648883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:36.645 [2024-11-18 14:11:28.649347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117333 ] 00:09:36.903 [2024-11-18 14:11:28.794987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.903 [2024-11-18 14:11:28.883175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.904 [2024-11-18 14:11:28.956560] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.163 [2024-11-18 14:11:29.072078] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:37.163 A filename is required. 
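("A filename is required." is the pass condition here: compress workloads take their input from -l, and the NOT wrapper expects accel_perf to bail out when it is omitted. The accepted form is what the very next test uses — minus the -y flag, which that test then shows is unsupported for compress:

  accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
)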
00:09:37.163 14:11:29 -- common/autotest_common.sh@653 -- # es=234 00:09:37.163 14:11:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.163 14:11:29 -- common/autotest_common.sh@662 -- # es=106 00:09:37.163 14:11:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:37.163 14:11:29 -- common/autotest_common.sh@670 -- # es=1 00:09:37.163 14:11:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.163 00:09:37.163 real 0m0.589s 00:09:37.163 user 0m0.361s 00:09:37.163 sys 0m0.177s 00:09:37.163 14:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.163 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:37.163 ************************************ 00:09:37.163 END TEST accel_missing_filename 00:09:37.163 ************************************ 00:09:37.422 14:11:29 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:37.422 14:11:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:37.422 14:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.422 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:37.422 ************************************ 00:09:37.422 START TEST accel_compress_verify 00:09:37.422 ************************************ 00:09:37.422 14:11:29 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:37.422 14:11:29 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.422 14:11:29 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:37.422 14:11:29 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:37.422 14:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.422 14:11:29 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:37.422 14:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.422 14:11:29 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:37.422 14:11:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:37.423 14:11:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:37.423 14:11:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.423 14:11:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.423 14:11:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.423 14:11:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.423 14:11:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.423 14:11:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.423 14:11:29 -- accel/accel.sh@42 -- # jq -r . 00:09:37.423 [2024-11-18 14:11:29.289390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
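(Worth noting how the NOT wrapper graded the failure above: it captures accel_perf's exit status, and any status over 128 is reduced by 128 before the case statement collapses it to 1, which is why es walks 234 → 106 → 1 in the trace. In shell terms:

  es=234
  (( es > 128 )) && es=$(( es - 128 ))   # 106
  es=1                                   # any remaining nonzero status counts as plain failure
)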
00:09:37.423 [2024-11-18 14:11:29.289635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117372 ] 00:09:37.423 [2024-11-18 14:11:29.434812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.682 [2024-11-18 14:11:29.507075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.682 [2024-11-18 14:11:29.582199] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.682 [2024-11-18 14:11:29.695634] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:37.941 00:09:37.941 Compression does not support the verify option, aborting. 00:09:37.941 14:11:29 -- common/autotest_common.sh@653 -- # es=161 00:09:37.941 14:11:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.941 14:11:29 -- common/autotest_common.sh@662 -- # es=33 00:09:37.941 14:11:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:37.941 14:11:29 -- common/autotest_common.sh@670 -- # es=1 00:09:37.941 14:11:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.941 00:09:37.941 real 0m0.571s 00:09:37.941 user 0m0.342s 00:09:37.941 sys 0m0.182s 00:09:37.941 14:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.941 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:37.941 ************************************ 00:09:37.941 END TEST accel_compress_verify 00:09:37.941 ************************************ 00:09:37.941 14:11:29 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:37.941 14:11:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:37.941 14:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.941 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:37.941 ************************************ 00:09:37.941 START TEST accel_wrong_workload 00:09:37.941 ************************************ 00:09:37.941 14:11:29 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:09:37.941 14:11:29 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.941 14:11:29 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:37.941 14:11:29 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:37.941 14:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.941 14:11:29 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:37.941 14:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.941 14:11:29 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:09:37.941 14:11:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:37.941 14:11:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:37.941 14:11:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.941 14:11:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.941 14:11:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.941 14:11:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.941 14:11:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.941 14:11:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.941 14:11:29 -- accel/accel.sh@42 -- # jq -r . 
00:09:37.941 Unsupported workload type: foobar 00:09:37.941 [2024-11-18 14:11:29.911100] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:37.941 accel_perf options: 00:09:37.941 [-h help message] 00:09:37.941 [-q queue depth per core] 00:09:37.941 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:37.941 [-T number of threads per core 00:09:37.941 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:37.941 [-t time in seconds] 00:09:37.941 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:37.941 [ dif_verify, , dif_generate, dif_generate_copy 00:09:37.941 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:37.941 [-l for compress/decompress workloads, name of uncompressed input file 00:09:37.941 [-S for crc32c workload, use this seed value (default 0) 00:09:37.941 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:37.941 [-f for fill workload, use this BYTE value (default 255) 00:09:37.941 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:37.941 [-y verify result if this switch is on] 00:09:37.941 [-a tasks to allocate per core (default: same value as -q)] 00:09:37.941 Can be used to spread operations across a wider range of memory. 00:09:37.941 14:11:29 -- common/autotest_common.sh@653 -- # es=1 00:09:37.941 14:11:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.941 14:11:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.941 14:11:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.941 00:09:37.941 real 0m0.051s 00:09:37.941 user 0m0.018s 00:09:37.941 sys 0m0.034s 00:09:37.942 14:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.942 ************************************ 00:09:37.942 END TEST accel_wrong_workload 00:09:37.942 ************************************ 00:09:37.942 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:37.942 14:11:29 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:37.942 14:11:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:37.942 14:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.942 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:09:37.942 ************************************ 00:09:37.942 START TEST accel_negative_buffers 00:09:37.942 ************************************ 00:09:37.942 14:11:29 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:37.942 14:11:29 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.942 14:11:29 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:37.942 14:11:29 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:37.942 14:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.942 14:11:29 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:37.942 14:11:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.942 14:11:29 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:09:37.942 14:11:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:37.942 14:11:29 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:37.942 14:11:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.942 14:11:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.942 14:11:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.942 14:11:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.942 14:11:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.942 14:11:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.942 14:11:29 -- accel/accel.sh@42 -- # jq -r . 00:09:37.942 -x option must be non-negative. 00:09:37.942 [2024-11-18 14:11:30.012608] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:38.201 accel_perf options: 00:09:38.201 [-h help message] 00:09:38.201 [-q queue depth per core] 00:09:38.201 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:38.201 [-T number of threads per core 00:09:38.201 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:38.201 [-t time in seconds] 00:09:38.201 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:38.201 [ dif_verify, , dif_generate, dif_generate_copy 00:09:38.201 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:38.201 [-l for compress/decompress workloads, name of uncompressed input file 00:09:38.201 [-S for crc32c workload, use this seed value (default 0) 00:09:38.201 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:38.201 [-f for fill workload, use this BYTE value (default 255) 00:09:38.201 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:38.201 [-y verify result if this switch is on] 00:09:38.201 [-a tasks to allocate per core (default: same value as -q)] 00:09:38.201 Can be used to spread operations across a wider range of memory. 
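(Both negative tests in this stretch — -w foobar and -x -1, above — die in spdk_app_parse_args before the app ever starts, which is why only this usage text gets printed. Going by the same usage text, the smallest legal xor invocation would look like:

  accel_perf -t 1 -w xor -y -x 2    # -x must be at least 2 per the help output above
)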
00:09:38.201 14:11:30 -- common/autotest_common.sh@653 -- # es=1 00:09:38.201 14:11:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:38.201 14:11:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:38.201 14:11:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:38.201 00:09:38.201 real 0m0.053s 00:09:38.201 user 0m0.020s 00:09:38.201 sys 0m0.033s 00:09:38.201 14:11:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.201 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:38.201 ************************************ 00:09:38.201 END TEST accel_negative_buffers 00:09:38.201 ************************************ 00:09:38.201 14:11:30 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:38.201 14:11:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:38.201 14:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.201 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:09:38.201 ************************************ 00:09:38.201 START TEST accel_crc32c 00:09:38.201 ************************************ 00:09:38.201 14:11:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:38.201 14:11:30 -- accel/accel.sh@16 -- # local accel_opc 00:09:38.201 14:11:30 -- accel/accel.sh@17 -- # local accel_module 00:09:38.201 14:11:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:38.201 14:11:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:38.201 14:11:30 -- accel/accel.sh@12 -- # build_accel_config 00:09:38.201 14:11:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:38.201 14:11:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:38.201 14:11:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:38.201 14:11:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:38.201 14:11:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:38.201 14:11:30 -- accel/accel.sh@41 -- # local IFS=, 00:09:38.201 14:11:30 -- accel/accel.sh@42 -- # jq -r . 00:09:38.201 [2024-11-18 14:11:30.117198] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:38.201 [2024-11-18 14:11:30.117452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117454 ] 00:09:38.202 [2024-11-18 14:11:30.266789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.460 [2024-11-18 14:11:30.336859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.836 14:11:31 -- accel/accel.sh@18 -- # out=' 00:09:39.836 SPDK Configuration: 00:09:39.836 Core mask: 0x1 00:09:39.836 00:09:39.836 Accel Perf Configuration: 00:09:39.836 Workload Type: crc32c 00:09:39.836 CRC-32C seed: 32 00:09:39.836 Transfer size: 4096 bytes 00:09:39.836 Vector count 1 00:09:39.836 Module: software 00:09:39.836 Queue depth: 32 00:09:39.836 Allocate depth: 32 00:09:39.836 # threads/core: 1 00:09:39.836 Run time: 1 seconds 00:09:39.836 Verify: Yes 00:09:39.836 00:09:39.836 Running for 1 seconds... 
00:09:39.836 00:09:39.836 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:39.836 ------------------------------------------------------------------------------------ 00:09:39.836 0,0 520320/s 2032 MiB/s 0 0 00:09:39.836 ==================================================================================== 00:09:39.836 Total 520320/s 2032 MiB/s 0 0' 00:09:39.836 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:39.836 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:39.836 14:11:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:39.836 14:11:31 -- accel/accel.sh@12 -- # build_accel_config 00:09:39.836 14:11:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:39.836 14:11:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:39.836 14:11:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.836 14:11:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.836 14:11:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:39.836 14:11:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:39.836 14:11:31 -- accel/accel.sh@41 -- # local IFS=, 00:09:39.836 14:11:31 -- accel/accel.sh@42 -- # jq -r . 00:09:39.836 [2024-11-18 14:11:31.661682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:39.836 [2024-11-18 14:11:31.661931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117477 ] 00:09:39.836 [2024-11-18 14:11:31.807398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.836 [2024-11-18 14:11:31.895980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=0x1 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=crc32c 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=32 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=software 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@23 -- # accel_module=software 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=32 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=32 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=1 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val=Yes 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:40.094 14:11:31 -- accel/accel.sh@21 -- # val= 00:09:40.094 14:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # IFS=: 00:09:40.094 14:11:31 -- accel/accel.sh@20 -- # read -r var val 00:09:41.470 14:11:33 -- accel/accel.sh@21 -- # val= 00:09:41.470 14:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.470 14:11:33 -- accel/accel.sh@20 -- # IFS=: 00:09:41.470 14:11:33 -- accel/accel.sh@20 -- # read -r var val 00:09:41.470 14:11:33 -- accel/accel.sh@21 -- # val= 00:09:41.470 14:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.470 14:11:33 -- accel/accel.sh@20 -- # IFS=: 00:09:41.470 14:11:33 -- accel/accel.sh@20 -- # read -r var val 00:09:41.470 14:11:33 -- accel/accel.sh@21 -- # val= 00:09:41.470 14:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # IFS=: 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # read -r var val 00:09:41.471 14:11:33 -- accel/accel.sh@21 -- # val= 00:09:41.471 14:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # IFS=: 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # read -r var val 00:09:41.471 14:11:33 -- accel/accel.sh@21 -- # val= 00:09:41.471 14:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # IFS=: 00:09:41.471 14:11:33 
-- accel/accel.sh@20 -- # read -r var val 00:09:41.471 14:11:33 -- accel/accel.sh@21 -- # val= 00:09:41.471 14:11:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # IFS=: 00:09:41.471 14:11:33 -- accel/accel.sh@20 -- # read -r var val 00:09:41.471 14:11:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:41.471 14:11:33 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:41.471 14:11:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:41.471 00:09:41.471 real 0m3.158s 00:09:41.471 user 0m2.629s 00:09:41.471 sys 0m0.369s 00:09:41.471 14:11:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.471 14:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:41.471 ************************************ 00:09:41.471 END TEST accel_crc32c 00:09:41.471 ************************************ 00:09:41.471 14:11:33 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:41.471 14:11:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:41.471 14:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.471 14:11:33 -- common/autotest_common.sh@10 -- # set +x 00:09:41.471 ************************************ 00:09:41.471 START TEST accel_crc32c_C2 00:09:41.471 ************************************ 00:09:41.471 14:11:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:41.471 14:11:33 -- accel/accel.sh@16 -- # local accel_opc 00:09:41.471 14:11:33 -- accel/accel.sh@17 -- # local accel_module 00:09:41.471 14:11:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:41.471 14:11:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:41.471 14:11:33 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.471 14:11:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.471 14:11:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.471 14:11:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.471 14:11:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.471 14:11:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.471 14:11:33 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.471 14:11:33 -- accel/accel.sh@42 -- # jq -r . 00:09:41.471 [2024-11-18 14:11:33.322469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:41.471 [2024-11-18 14:11:33.322713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117522 ] 00:09:41.471 [2024-11-18 14:11:33.466515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.471 [2024-11-18 14:11:33.529457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.847 14:11:34 -- accel/accel.sh@18 -- # out=' 00:09:42.847 SPDK Configuration: 00:09:42.847 Core mask: 0x1 00:09:42.847 00:09:42.847 Accel Perf Configuration: 00:09:42.847 Workload Type: crc32c 00:09:42.847 CRC-32C seed: 0 00:09:42.847 Transfer size: 4096 bytes 00:09:42.847 Vector count 2 00:09:42.847 Module: software 00:09:42.847 Queue depth: 32 00:09:42.847 Allocate depth: 32 00:09:42.847 # threads/core: 1 00:09:42.847 Run time: 1 seconds 00:09:42.847 Verify: Yes 00:09:42.847 00:09:42.847 Running for 1 seconds... 
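(A quick sanity check on the single-vector crc32c run that finished above: the bandwidth column is transfers per second times the 4096-byte transfer size,

  echo $(( 520320 * 4096 / 1024 / 1024 ))   # 2032 MiB/s, matching the reported table
)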
00:09:42.847 00:09:42.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:42.847 ------------------------------------------------------------------------------------ 00:09:42.847 0,0 403744/s 3154 MiB/s 0 0 00:09:42.847 ==================================================================================== 00:09:42.847 Total 403744/s 1577 MiB/s 0 0' 00:09:42.847 14:11:34 -- accel/accel.sh@20 -- # IFS=: 00:09:42.847 14:11:34 -- accel/accel.sh@20 -- # read -r var val 00:09:42.847 14:11:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:42.847 14:11:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:42.847 14:11:34 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.847 14:11:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.847 14:11:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.847 14:11:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.847 14:11:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.847 14:11:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.847 14:11:34 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.847 14:11:34 -- accel/accel.sh@42 -- # jq -r . 00:09:42.847 [2024-11-18 14:11:34.890083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:42.848 [2024-11-18 14:11:34.890383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117545 ] 00:09:43.106 [2024-11-18 14:11:35.037949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.106 [2024-11-18 14:11:35.134146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val=0x1 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val=crc32c 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val=0 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.365 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.365 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.365 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val=software 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@23 -- # accel_module=software 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val=32 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val=32 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val=1 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val=Yes 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:43.366 14:11:35 -- accel/accel.sh@21 -- # val= 00:09:43.366 14:11:35 -- accel/accel.sh@22 -- # case "$var" in 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # IFS=: 00:09:43.366 14:11:35 -- accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@21 -- # val= 00:09:44.743 14:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # IFS=: 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@21 -- # val= 00:09:44.743 14:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # IFS=: 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@21 -- # val= 00:09:44.743 14:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # IFS=: 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@21 -- # val= 00:09:44.743 14:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # IFS=: 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@21 -- # val= 00:09:44.743 14:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # IFS=: 00:09:44.743 14:11:36 -- 
accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@21 -- # val= 00:09:44.743 14:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # IFS=: 00:09:44.743 14:11:36 -- accel/accel.sh@20 -- # read -r var val 00:09:44.743 14:11:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:44.743 14:11:36 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:44.743 14:11:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.743 00:09:44.743 real 0m3.183s 00:09:44.743 user 0m2.715s 00:09:44.743 sys 0m0.297s 00:09:44.743 14:11:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.743 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:09:44.743 ************************************ 00:09:44.743 END TEST accel_crc32c_C2 00:09:44.743 ************************************ 00:09:44.743 14:11:36 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:44.743 14:11:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:44.743 14:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.743 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:09:44.743 ************************************ 00:09:44.743 START TEST accel_copy 00:09:44.743 ************************************ 00:09:44.743 14:11:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:09:44.743 14:11:36 -- accel/accel.sh@16 -- # local accel_opc 00:09:44.743 14:11:36 -- accel/accel.sh@17 -- # local accel_module 00:09:44.743 14:11:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:44.743 14:11:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:44.743 14:11:36 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.743 14:11:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.743 14:11:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.743 14:11:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.743 14:11:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.743 14:11:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.743 14:11:36 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.743 14:11:36 -- accel/accel.sh@42 -- # jq -r . 00:09:44.743 [2024-11-18 14:11:36.559862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:44.743 [2024-11-18 14:11:36.560097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117592 ] 00:09:44.743 [2024-11-18 14:11:36.706152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.743 [2024-11-18 14:11:36.772274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.119 14:11:38 -- accel/accel.sh@18 -- # out=' 00:09:46.119 SPDK Configuration: 00:09:46.119 Core mask: 0x1 00:09:46.119 00:09:46.119 Accel Perf Configuration: 00:09:46.119 Workload Type: copy 00:09:46.119 Transfer size: 4096 bytes 00:09:46.119 Vector count 1 00:09:46.119 Module: software 00:09:46.119 Queue depth: 32 00:09:46.119 Allocate depth: 32 00:09:46.119 # threads/core: 1 00:09:46.119 Run time: 1 seconds 00:09:46.119 Verify: Yes 00:09:46.119 00:09:46.119 Running for 1 seconds... 
00:09:46.119 00:09:46.119 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:46.119 ------------------------------------------------------------------------------------ 00:09:46.119 0,0 316640/s 1236 MiB/s 0 0 00:09:46.119 ==================================================================================== 00:09:46.119 Total 316640/s 1236 MiB/s 0 0' 00:09:46.119 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.120 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.120 14:11:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:46.120 14:11:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:46.120 14:11:38 -- accel/accel.sh@12 -- # build_accel_config 00:09:46.120 14:11:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:46.120 14:11:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.120 14:11:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.120 14:11:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:46.120 14:11:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:46.120 14:11:38 -- accel/accel.sh@41 -- # local IFS=, 00:09:46.120 14:11:38 -- accel/accel.sh@42 -- # jq -r . 00:09:46.120 [2024-11-18 14:11:38.140071] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:46.120 [2024-11-18 14:11:38.141155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117615 ] 00:09:46.378 [2024-11-18 14:11:38.302003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.378 [2024-11-18 14:11:38.380059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.637 14:11:38 -- accel/accel.sh@21 -- # val= 00:09:46.637 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.637 14:11:38 -- accel/accel.sh@21 -- # val= 00:09:46.637 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.637 14:11:38 -- accel/accel.sh@21 -- # val=0x1 00:09:46.637 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.637 14:11:38 -- accel/accel.sh@21 -- # val= 00:09:46.637 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.637 14:11:38 -- accel/accel.sh@21 -- # val= 00:09:46.637 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.637 14:11:38 -- accel/accel.sh@21 -- # val=copy 00:09:46.637 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.637 14:11:38 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.637 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- 
accel/accel.sh@21 -- # val= 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val=software 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@23 -- # accel_module=software 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val=32 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val=32 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val=1 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val=Yes 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val= 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:46.638 14:11:38 -- accel/accel.sh@21 -- # val= 00:09:46.638 14:11:38 -- accel/accel.sh@22 -- # case "$var" in 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # IFS=: 00:09:46.638 14:11:38 -- accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@21 -- # val= 00:09:48.016 14:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # IFS=: 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@21 -- # val= 00:09:48.016 14:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # IFS=: 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@21 -- # val= 00:09:48.016 14:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # IFS=: 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@21 -- # val= 00:09:48.016 14:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # IFS=: 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@21 -- # val= 00:09:48.016 14:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # IFS=: 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@21 -- # val= 00:09:48.016 14:11:39 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.016 14:11:39 -- accel/accel.sh@20 -- # IFS=: 00:09:48.016 14:11:39 -- 
accel/accel.sh@20 -- # read -r var val 00:09:48.016 14:11:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:48.016 14:11:39 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:48.016 14:11:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:48.016 00:09:48.016 real 0m3.173s 00:09:48.016 user 0m2.668s 00:09:48.016 sys 0m0.367s 00:09:48.016 14:11:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.016 14:11:39 -- common/autotest_common.sh@10 -- # set +x 00:09:48.016 ************************************ 00:09:48.016 END TEST accel_copy 00:09:48.016 ************************************ 00:09:48.016 14:11:39 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:48.016 14:11:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:48.016 14:11:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.016 14:11:39 -- common/autotest_common.sh@10 -- # set +x 00:09:48.016 ************************************ 00:09:48.016 START TEST accel_fill 00:09:48.016 ************************************ 00:09:48.016 14:11:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:48.016 14:11:39 -- accel/accel.sh@16 -- # local accel_opc 00:09:48.016 14:11:39 -- accel/accel.sh@17 -- # local accel_module 00:09:48.016 14:11:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:48.016 14:11:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:48.016 14:11:39 -- accel/accel.sh@12 -- # build_accel_config 00:09:48.016 14:11:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:48.016 14:11:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:48.016 14:11:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:48.016 14:11:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:48.016 14:11:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:48.016 14:11:39 -- accel/accel.sh@41 -- # local IFS=, 00:09:48.016 14:11:39 -- accel/accel.sh@42 -- # jq -r . 00:09:48.016 [2024-11-18 14:11:39.781916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:48.016 [2024-11-18 14:11:39.782301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117660 ] 00:09:48.016 [2024-11-18 14:11:39.927472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.016 [2024-11-18 14:11:39.992650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.395 14:11:41 -- accel/accel.sh@18 -- # out=' 00:09:49.395 SPDK Configuration: 00:09:49.395 Core mask: 0x1 00:09:49.395 00:09:49.395 Accel Perf Configuration: 00:09:49.395 Workload Type: fill 00:09:49.395 Fill pattern: 0x80 00:09:49.395 Transfer size: 4096 bytes 00:09:49.395 Vector count 1 00:09:49.395 Module: software 00:09:49.395 Queue depth: 64 00:09:49.395 Allocate depth: 64 00:09:49.395 # threads/core: 1 00:09:49.395 Run time: 1 seconds 00:09:49.395 Verify: Yes 00:09:49.395 00:09:49.395 Running for 1 seconds... 
00:09:49.395 00:09:49.395 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:49.395 ------------------------------------------------------------------------------------ 00:09:49.395 0,0 477952/s 1867 MiB/s 0 0 00:09:49.395 ==================================================================================== 00:09:49.395 Total 477952/s 1867 MiB/s 0 0' 00:09:49.395 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.395 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.395 14:11:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.395 14:11:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:49.395 14:11:41 -- accel/accel.sh@12 -- # build_accel_config 00:09:49.395 14:11:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.395 14:11:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.395 14:11:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.395 14:11:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.395 14:11:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:49.395 14:11:41 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.395 14:11:41 -- accel/accel.sh@42 -- # jq -r . 00:09:49.395 [2024-11-18 14:11:41.363952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:49.395 [2024-11-18 14:11:41.365283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117690 ] 00:09:49.655 [2024-11-18 14:11:41.526544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.655 [2024-11-18 14:11:41.604030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=0x1 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=fill 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@24 -- # accel_opc=fill 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=0x80 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 
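The fill pass above can be reproduced outside the harness. A minimal sketch, assuming the repo layout shown in the trace; the binary path and flags are copied verbatim from the invocation logged above, while piping '{}' as the JSON config is an assumption standing in for the harness-built config on /dev/fd/62:

  # sketch: manual re-run of the fill workload (flags exactly as traced above)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w fill -f 128 -q 64 -a 64 -y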
00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=software 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@23 -- # accel_module=software 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=64 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=64 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=1 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val=Yes 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 14:11:41 -- accel/accel.sh@21 -- # val= 00:09:49.655 14:11:41 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 14:11:41 -- accel/accel.sh@20 -- # read -r var val 00:09:51.052 14:11:42 -- accel/accel.sh@21 -- # val= 00:09:51.052 14:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.052 14:11:42 -- accel/accel.sh@20 -- # IFS=: 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # read -r var val 00:09:51.053 14:11:42 -- accel/accel.sh@21 -- # val= 00:09:51.053 14:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # IFS=: 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # read -r var val 00:09:51.053 14:11:42 -- accel/accel.sh@21 -- # val= 00:09:51.053 14:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # IFS=: 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # read -r var val 00:09:51.053 14:11:42 -- accel/accel.sh@21 -- # val= 00:09:51.053 14:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # IFS=: 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # read -r var val 00:09:51.053 14:11:42 -- accel/accel.sh@21 -- # val= 00:09:51.053 14:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # IFS=: 
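The bandwidth figures in these tables follow directly from transfers per second multiplied by transfer size, which makes a quick sanity check possible. A sketch in shell arithmetic, using the fill numbers reported above:

  # 477952 transfers/s * 4096 bytes per transfer, expressed in MiB/s
  echo $(( 477952 * 4096 / 1048576 ))    # prints 1867 -- matches the 1867 MiB/s row above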
00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # read -r var val 00:09:51.053 14:11:42 -- accel/accel.sh@21 -- # val= 00:09:51.053 14:11:42 -- accel/accel.sh@22 -- # case "$var" in 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # IFS=: 00:09:51.053 14:11:42 -- accel/accel.sh@20 -- # read -r var val 00:09:51.053 14:11:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:51.053 14:11:42 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:09:51.053 14:11:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:51.053 00:09:51.053 real 0m3.201s 00:09:51.053 user 0m2.730s 00:09:51.053 sys 0m0.329s 00:09:51.053 14:11:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:51.053 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.053 ************************************ 00:09:51.053 END TEST accel_fill 00:09:51.053 ************************************ 00:09:51.053 14:11:42 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:51.053 14:11:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:51.053 14:11:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.053 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:09:51.053 ************************************ 00:09:51.053 START TEST accel_copy_crc32c 00:09:51.053 ************************************ 00:09:51.053 14:11:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:09:51.053 14:11:43 -- accel/accel.sh@16 -- # local accel_opc 00:09:51.053 14:11:43 -- accel/accel.sh@17 -- # local accel_module 00:09:51.053 14:11:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:51.053 14:11:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:51.053 14:11:43 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.053 14:11:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:51.053 14:11:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.053 14:11:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.053 14:11:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:51.053 14:11:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:51.053 14:11:43 -- accel/accel.sh@41 -- # local IFS=, 00:09:51.053 14:11:43 -- accel/accel.sh@42 -- # jq -r . 00:09:51.053 [2024-11-18 14:11:43.033208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:51.053 [2024-11-18 14:11:43.033617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117728 ] 00:09:51.335 [2024-11-18 14:11:43.178607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.335 [2024-11-18 14:11:43.243599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.713 14:11:44 -- accel/accel.sh@18 -- # out=' 00:09:52.713 SPDK Configuration: 00:09:52.713 Core mask: 0x1 00:09:52.713 00:09:52.713 Accel Perf Configuration: 00:09:52.713 Workload Type: copy_crc32c 00:09:52.713 CRC-32C seed: 0 00:09:52.713 Vector size: 4096 bytes 00:09:52.713 Transfer size: 4096 bytes 00:09:52.713 Vector count 1 00:09:52.713 Module: software 00:09:52.713 Queue depth: 32 00:09:52.713 Allocate depth: 32 00:09:52.713 # threads/core: 1 00:09:52.713 Run time: 1 seconds 00:09:52.713 Verify: Yes 00:09:52.713 00:09:52.713 Running for 1 seconds... 
00:09:52.713 00:09:52.713 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:52.713 ------------------------------------------------------------------------------------ 00:09:52.713 0,0 257664/s 1006 MiB/s 0 0 00:09:52.713 ==================================================================================== 00:09:52.713 Total 257664/s 1006 MiB/s 0 0' 00:09:52.713 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.713 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.713 14:11:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:52.713 14:11:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:52.713 14:11:44 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.713 14:11:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.713 14:11:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.713 14:11:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.713 14:11:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.713 14:11:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.713 14:11:44 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.713 14:11:44 -- accel/accel.sh@42 -- # jq -r . 00:09:52.713 [2024-11-18 14:11:44.609722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:52.713 [2024-11-18 14:11:44.609990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117758 ] 00:09:52.713 [2024-11-18 14:11:44.754754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.972 [2024-11-18 14:11:44.832227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.972 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.972 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.972 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.972 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.972 14:11:44 -- accel/accel.sh@21 -- # val=0x1 00:09:52.972 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.972 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.972 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.972 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=0 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 
14:11:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=software 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@23 -- # accel_module=software 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=32 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=32 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=1 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val=Yes 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:52.973 14:11:44 -- accel/accel.sh@21 -- # val= 00:09:52.973 14:11:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # IFS=: 00:09:52.973 14:11:44 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@21 -- # val= 00:09:54.351 14:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # IFS=: 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@21 -- # val= 00:09:54.351 14:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # IFS=: 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@21 -- # val= 00:09:54.351 14:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # IFS=: 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@21 -- # val= 00:09:54.351 14:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # IFS=: 
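The two separate '4096 bytes' values traced above line up with the 'Vector size' and 'Transfer size' rows of the copy_crc32c configuration block: the workload both copies the buffer and checksums it. A minimal standalone sketch, under the same assumption as before that '{}' can stand in for the harness config on /dev/fd/62:

  # sketch: manual re-run of the copy_crc32c workload (flags as traced above)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w copy_crc32c -y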
00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@21 -- # val= 00:09:54.351 14:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # IFS=: 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@21 -- # val= 00:09:54.351 14:11:46 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # IFS=: 00:09:54.351 14:11:46 -- accel/accel.sh@20 -- # read -r var val 00:09:54.351 14:11:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:54.351 14:11:46 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:54.351 14:11:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.351 00:09:54.351 real 0m3.189s 00:09:54.351 user 0m2.652s 00:09:54.351 sys 0m0.361s 00:09:54.351 14:11:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.351 ************************************ 00:09:54.351 END TEST accel_copy_crc32c 00:09:54.351 ************************************ 00:09:54.351 14:11:46 -- common/autotest_common.sh@10 -- # set +x 00:09:54.351 14:11:46 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:54.351 14:11:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:54.351 14:11:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.351 14:11:46 -- common/autotest_common.sh@10 -- # set +x 00:09:54.351 ************************************ 00:09:54.351 START TEST accel_copy_crc32c_C2 00:09:54.351 ************************************ 00:09:54.351 14:11:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:54.351 14:11:46 -- accel/accel.sh@16 -- # local accel_opc 00:09:54.351 14:11:46 -- accel/accel.sh@17 -- # local accel_module 00:09:54.351 14:11:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:54.351 14:11:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:54.351 14:11:46 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.351 14:11:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.351 14:11:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.351 14:11:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.351 14:11:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.351 14:11:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.351 14:11:46 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.351 14:11:46 -- accel/accel.sh@42 -- # jq -r . 00:09:54.351 [2024-11-18 14:11:46.276988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:54.351 [2024-11-18 14:11:46.277234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117798 ] 00:09:54.351 [2024-11-18 14:11:46.422746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.610 [2024-11-18 14:11:46.489055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.989 14:11:47 -- accel/accel.sh@18 -- # out=' 00:09:55.989 SPDK Configuration: 00:09:55.989 Core mask: 0x1 00:09:55.989 00:09:55.989 Accel Perf Configuration: 00:09:55.989 Workload Type: copy_crc32c 00:09:55.989 CRC-32C seed: 0 00:09:55.989 Vector size: 4096 bytes 00:09:55.989 Transfer size: 8192 bytes 00:09:55.989 Vector count 2 00:09:55.989 Module: software 00:09:55.989 Queue depth: 32 00:09:55.989 Allocate depth: 32 00:09:55.989 # threads/core: 1 00:09:55.989 Run time: 1 seconds 00:09:55.989 Verify: Yes 00:09:55.989 00:09:55.989 Running for 1 seconds... 00:09:55.989 00:09:55.989 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:55.989 ------------------------------------------------------------------------------------ 00:09:55.989 0,0 181824/s 1420 MiB/s 0 0 00:09:55.989 ==================================================================================== 00:09:55.989 Total 181824/s 710 MiB/s 0 0' 00:09:55.989 14:11:47 -- accel/accel.sh@20 -- # IFS=: 00:09:55.989 14:11:47 -- accel/accel.sh@20 -- # read -r var val 00:09:55.989 14:11:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:55.989 14:11:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:55.989 14:11:47 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.989 14:11:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.989 14:11:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.989 14:11:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.989 14:11:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.989 14:11:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.989 14:11:47 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.989 14:11:47 -- accel/accel.sh@42 -- # jq -r . 00:09:55.989 [2024-11-18 14:11:47.814499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:55.989 [2024-11-18 14:11:47.814794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117828 ] 00:09:55.989 [2024-11-18 14:11:47.959487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.989 [2024-11-18 14:11:48.031991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=0x1 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=0 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val='8192 bytes' 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=software 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@23 -- # accel_module=software 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=32 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=32 
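With -C 2 the workload checksums two 4096-byte vectors per transfer, which is why the trace above carries both a '4096 bytes' and an '8192 bytes' value. Cross-checking the copy_crc32c_C2 table earlier, the per-core row appears to scale with the full 8192-byte transfer while the Total row appears to count a single 4096-byte vector; a quick sketch of both readings:

  echo $(( 181824 * 8192 / 1048576 ))    # 1420 -- the per-core 1420 MiB/s row
  echo $(( 181824 * 4096 / 1048576 ))    # 710  -- the Total 710 MiB/s row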
00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=1 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val=Yes 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:56.249 14:11:48 -- accel/accel.sh@21 -- # val= 00:09:56.249 14:11:48 -- accel/accel.sh@22 -- # case "$var" in 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # IFS=: 00:09:56.249 14:11:48 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 14:11:49 -- accel/accel.sh@21 -- # val= 00:09:57.627 14:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # IFS=: 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 14:11:49 -- accel/accel.sh@21 -- # val= 00:09:57.627 14:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # IFS=: 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 14:11:49 -- accel/accel.sh@21 -- # val= 00:09:57.627 14:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # IFS=: 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 14:11:49 -- accel/accel.sh@21 -- # val= 00:09:57.627 14:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # IFS=: 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 14:11:49 -- accel/accel.sh@21 -- # val= 00:09:57.627 14:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # IFS=: 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 14:11:49 -- accel/accel.sh@21 -- # val= 00:09:57.627 14:11:49 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # IFS=: 00:09:57.627 14:11:49 -- accel/accel.sh@20 -- # read -r var val 00:09:57.627 ************************************ 00:09:57.627 END TEST accel_copy_crc32c_C2 00:09:57.627 ************************************ 00:09:57.627 14:11:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:57.627 14:11:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:57.627 14:11:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:57.627 00:09:57.627 real 0m3.136s 00:09:57.627 user 0m2.657s 00:09:57.627 sys 0m0.305s 00:09:57.627 14:11:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:57.627 14:11:49 -- common/autotest_common.sh@10 -- # set +x 00:09:57.627 14:11:49 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:57.627 14:11:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
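Every suite here goes through the same run_test/accel_test wrapper captured in the trace (run_test accel_dualcast accel_test -t 1 -w dualcast -y). To iterate on one workload without the banners and xtrace bookkeeping, the underlying binary can be invoked directly; a sketch under the same '{}' config assumption as above:

  # standalone equivalent of the dualcast test (sketch)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(echo '{}') -t 1 -w dualcast -y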
00:09:57.627 14:11:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:57.627 14:11:49 -- common/autotest_common.sh@10 -- # set +x 00:09:57.627 ************************************ 00:09:57.627 START TEST accel_dualcast 00:09:57.627 ************************************ 00:09:57.627 14:11:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:09:57.627 14:11:49 -- accel/accel.sh@16 -- # local accel_opc 00:09:57.627 14:11:49 -- accel/accel.sh@17 -- # local accel_module 00:09:57.627 14:11:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:09:57.627 14:11:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:57.627 14:11:49 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.627 14:11:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.627 14:11:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.627 14:11:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.627 14:11:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.627 14:11:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.627 14:11:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.627 14:11:49 -- accel/accel.sh@42 -- # jq -r . 00:09:57.627 [2024-11-18 14:11:49.463858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:57.627 [2024-11-18 14:11:49.464080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117874 ] 00:09:57.627 [2024-11-18 14:11:49.610593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.627 [2024-11-18 14:11:49.675894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.004 14:11:50 -- accel/accel.sh@18 -- # out=' 00:09:59.004 SPDK Configuration: 00:09:59.004 Core mask: 0x1 00:09:59.004 00:09:59.004 Accel Perf Configuration: 00:09:59.004 Workload Type: dualcast 00:09:59.004 Transfer size: 4096 bytes 00:09:59.004 Vector count 1 00:09:59.004 Module: software 00:09:59.004 Queue depth: 32 00:09:59.004 Allocate depth: 32 00:09:59.004 # threads/core: 1 00:09:59.004 Run time: 1 seconds 00:09:59.004 Verify: Yes 00:09:59.004 00:09:59.004 Running for 1 seconds... 00:09:59.004 00:09:59.004 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:59.004 ------------------------------------------------------------------------------------ 00:09:59.004 0,0 331936/s 1296 MiB/s 0 0 00:09:59.004 ==================================================================================== 00:09:59.004 Total 331936/s 1296 MiB/s 0 0' 00:09:59.004 14:11:50 -- accel/accel.sh@20 -- # IFS=: 00:09:59.004 14:11:50 -- accel/accel.sh@20 -- # read -r var val 00:09:59.004 14:11:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:59.004 14:11:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:59.004 14:11:50 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.004 14:11:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.004 14:11:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.004 14:11:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.004 14:11:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.004 14:11:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.004 14:11:51 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.004 14:11:51 -- accel/accel.sh@42 -- # jq -r . 
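The -c /dev/fd/62 argument in each invocation comes from build_accel_config, whose trace repeats above: the accel_json_cfg array stays empty (every [[ 0 -gt 0 ]] guard evaluates false), any fragments would be joined with IFS=, and the result is pretty-printed by jq -r . onto a file descriptor that accel_perf reads. A rough sketch of that plumbing; the exact JSON the harness would assemble is not shown in this log, so the pipeline below is illustrative only:

  accel_json_cfg=()                              # remains empty in the runs traced here
  IFS=,                                          # would join any accumulated fragments
  printf '%s' "${accel_json_cfg[*]}" | jq -r .   # pretty-print the assembled config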
00:09:59.004 [2024-11-18 14:11:51.026862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:59.004 [2024-11-18 14:11:51.027101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117896 ] 00:09:59.263 [2024-11-18 14:11:51.171730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.263 [2024-11-18 14:11:51.248326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=0x1 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=dualcast 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=software 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@23 -- # accel_module=software 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=32 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=32 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=1 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 
14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val=Yes 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:09:59.523 14:11:51 -- accel/accel.sh@21 -- # val= 00:09:59.523 14:11:51 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # IFS=: 00:09:59.523 14:11:51 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@21 -- # val= 00:10:00.901 14:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # IFS=: 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@21 -- # val= 00:10:00.901 14:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # IFS=: 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@21 -- # val= 00:10:00.901 14:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # IFS=: 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@21 -- # val= 00:10:00.901 14:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # IFS=: 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@21 -- # val= 00:10:00.901 14:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # IFS=: 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@21 -- # val= 00:10:00.901 14:11:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # IFS=: 00:10:00.901 14:11:52 -- accel/accel.sh@20 -- # read -r var val 00:10:00.901 14:11:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:00.901 14:11:52 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:00.901 14:11:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:00.901 00:10:00.901 real 0m3.171s 00:10:00.901 user 0m2.673s 00:10:00.901 sys 0m0.322s 00:10:00.901 14:11:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:00.901 ************************************ 00:10:00.901 END TEST accel_dualcast 00:10:00.901 ************************************ 00:10:00.901 14:11:52 -- common/autotest_common.sh@10 -- # set +x 00:10:00.901 14:11:52 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:00.901 14:11:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:00.901 14:11:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.901 14:11:52 -- common/autotest_common.sh@10 -- # set +x 00:10:00.901 ************************************ 00:10:00.901 START TEST accel_compare 00:10:00.901 ************************************ 00:10:00.901 14:11:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:00.901 
14:11:52 -- accel/accel.sh@16 -- # local accel_opc 00:10:00.901 14:11:52 -- accel/accel.sh@17 -- # local accel_module 00:10:00.901 14:11:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:00.901 14:11:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:00.901 14:11:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:00.901 14:11:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:00.901 14:11:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:00.901 14:11:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:00.901 14:11:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:00.901 14:11:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:00.901 14:11:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:00.901 14:11:52 -- accel/accel.sh@42 -- # jq -r . 00:10:00.901 [2024-11-18 14:11:52.688986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:00.901 [2024-11-18 14:11:52.689236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117936 ] 00:10:00.901 [2024-11-18 14:11:52.838002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.901 [2024-11-18 14:11:52.905515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.275 14:11:54 -- accel/accel.sh@18 -- # out=' 00:10:02.275 SPDK Configuration: 00:10:02.275 Core mask: 0x1 00:10:02.275 00:10:02.275 Accel Perf Configuration: 00:10:02.275 Workload Type: compare 00:10:02.275 Transfer size: 4096 bytes 00:10:02.275 Vector count 1 00:10:02.275 Module: software 00:10:02.275 Queue depth: 32 00:10:02.275 Allocate depth: 32 00:10:02.275 # threads/core: 1 00:10:02.275 Run time: 1 seconds 00:10:02.275 Verify: Yes 00:10:02.275 00:10:02.275 Running for 1 seconds... 00:10:02.275 00:10:02.275 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:02.275 ------------------------------------------------------------------------------------ 00:10:02.275 0,0 477856/s 1866 MiB/s 0 0 00:10:02.275 ==================================================================================== 00:10:02.275 Total 477856/s 1866 MiB/s 0 0' 00:10:02.275 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.275 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.275 14:11:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:02.275 14:11:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:02.275 14:11:54 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.275 14:11:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.275 14:11:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.275 14:11:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.275 14:11:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.275 14:11:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.275 14:11:54 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.275 14:11:54 -- accel/accel.sh@42 -- # jq -r . 00:10:02.275 [2024-11-18 14:11:54.264860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
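[Annotation, not captured output] Each workload is measured twice: once with plain CLI flags, then again with a JSON accel config handed over on file descriptor 62, which is what the '-c /dev/fd/62' echoed above refers to. One plausible shape of that second invocation, sketched from the trace; the fd-62 plumbing shown here is assumed, not visible in the log, and $accel_json_cfg is the array the trace names:
# Hand accel_perf a JSON config on fd 62 (assumed redirection shape).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 62< <(printf '%s' "${accel_json_cfg[*]}")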
00:10:02.275 [2024-11-18 14:11:54.265107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117964 ] 00:10:02.534 [2024-11-18 14:11:54.410387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.534 [2024-11-18 14:11:54.488820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=0x1 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=compare 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=software 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=32 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=32 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=1 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val=Yes 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:02.534 14:11:54 -- accel/accel.sh@21 -- # val= 00:10:02.534 14:11:54 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # IFS=: 00:10:02.534 14:11:54 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@21 -- # val= 00:10:03.911 14:11:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # IFS=: 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@21 -- # val= 00:10:03.911 14:11:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # IFS=: 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@21 -- # val= 00:10:03.911 14:11:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # IFS=: 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@21 -- # val= 00:10:03.911 14:11:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # IFS=: 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@21 -- # val= 00:10:03.911 14:11:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # IFS=: 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@21 -- # val= 00:10:03.911 14:11:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # IFS=: 00:10:03.911 14:11:55 -- accel/accel.sh@20 -- # read -r var val 00:10:03.911 14:11:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:03.911 14:11:55 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:03.911 14:11:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.911 00:10:03.911 real 0m3.152s 00:10:03.911 user 0m2.663s 00:10:03.911 sys 0m0.317s 00:10:03.911 ************************************ 00:10:03.911 END TEST accel_compare 00:10:03.911 ************************************ 00:10:03.911 14:11:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:03.911 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:10:03.911 14:11:55 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:03.911 14:11:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:03.911 14:11:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.911 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:10:03.911 ************************************ 00:10:03.911 START TEST accel_xor 00:10:03.911 ************************************ 00:10:03.911 14:11:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:03.911 14:11:55 -- accel/accel.sh@16 -- # local accel_opc 00:10:03.911 14:11:55 -- accel/accel.sh@17 -- # local accel_module 00:10:03.911 
14:11:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:03.911 14:11:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:03.911 14:11:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.911 14:11:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.911 14:11:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.911 14:11:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.911 14:11:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.911 14:11:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.911 14:11:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.911 14:11:55 -- accel/accel.sh@42 -- # jq -r . 00:10:03.911 [2024-11-18 14:11:55.891275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:03.911 [2024-11-18 14:11:55.891529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118004 ] 00:10:04.170 [2024-11-18 14:11:56.038540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.170 [2024-11-18 14:11:56.104370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.545 14:11:57 -- accel/accel.sh@18 -- # out=' 00:10:05.545 SPDK Configuration: 00:10:05.545 Core mask: 0x1 00:10:05.545 00:10:05.545 Accel Perf Configuration: 00:10:05.545 Workload Type: xor 00:10:05.545 Source buffers: 2 00:10:05.545 Transfer size: 4096 bytes 00:10:05.545 Vector count 1 00:10:05.545 Module: software 00:10:05.545 Queue depth: 32 00:10:05.545 Allocate depth: 32 00:10:05.546 # threads/core: 1 00:10:05.546 Run time: 1 seconds 00:10:05.546 Verify: Yes 00:10:05.546 00:10:05.546 Running for 1 seconds... 00:10:05.546 00:10:05.546 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:05.546 ------------------------------------------------------------------------------------ 00:10:05.546 0,0 228256/s 891 MiB/s 0 0 00:10:05.546 ==================================================================================== 00:10:05.546 Total 228256/s 891 MiB/s 0 0' 00:10:05.546 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.546 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.546 14:11:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:05.546 14:11:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:05.546 14:11:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:05.546 14:11:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.546 14:11:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.546 14:11:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.546 14:11:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.546 14:11:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.546 14:11:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.546 14:11:57 -- accel/accel.sh@42 -- # jq -r . 00:10:05.546 [2024-11-18 14:11:57.428571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
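[Annotation, not captured output] The xor run above uses two source buffers, the default. The next test repeats the workload with three sources, and the only change to the invocation is the -x flag visible in the trace that follows:
# Default two-source xor vs. the three-source variant driven next.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3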
00:10:05.546 [2024-11-18 14:11:57.428808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118034 ] 00:10:05.546 [2024-11-18 14:11:57.574993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.804 [2024-11-18 14:11:57.652275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val=0x1 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val=xor 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val=2 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:05.804 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.804 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.804 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val=software 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@23 -- # accel_module=software 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val=32 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val=32 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val=1 00:10:05.805 14:11:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val=Yes 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:05.805 14:11:57 -- accel/accel.sh@21 -- # val= 00:10:05.805 14:11:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # IFS=: 00:10:05.805 14:11:57 -- accel/accel.sh@20 -- # read -r var val 00:10:07.182 14:11:58 -- accel/accel.sh@21 -- # val= 00:10:07.182 14:11:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.182 14:11:58 -- accel/accel.sh@20 -- # IFS=: 00:10:07.182 14:11:58 -- accel/accel.sh@20 -- # read -r var val 00:10:07.182 14:11:58 -- accel/accel.sh@21 -- # val= 00:10:07.182 14:11:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.182 14:11:58 -- accel/accel.sh@20 -- # IFS=: 00:10:07.182 14:11:58 -- accel/accel.sh@20 -- # read -r var val 00:10:07.182 14:11:58 -- accel/accel.sh@21 -- # val= 00:10:07.182 14:11:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # IFS=: 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # read -r var val 00:10:07.183 14:11:58 -- accel/accel.sh@21 -- # val= 00:10:07.183 14:11:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # IFS=: 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # read -r var val 00:10:07.183 14:11:58 -- accel/accel.sh@21 -- # val= 00:10:07.183 14:11:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # IFS=: 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # read -r var val 00:10:07.183 14:11:58 -- accel/accel.sh@21 -- # val= 00:10:07.183 14:11:58 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # IFS=: 00:10:07.183 14:11:58 -- accel/accel.sh@20 -- # read -r var val 00:10:07.183 14:11:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:07.183 14:11:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:07.183 ************************************ 00:10:07.183 END TEST accel_xor 00:10:07.183 14:11:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:07.183 00:10:07.183 real 0m3.144s 00:10:07.183 user 0m2.617s 00:10:07.183 sys 0m0.343s 00:10:07.183 14:11:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:07.183 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.183 ************************************ 00:10:07.183 14:11:59 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:07.183 14:11:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:07.183 14:11:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.183 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.183 ************************************ 00:10:07.183 START TEST accel_xor 00:10:07.183 ************************************ 00:10:07.183 
14:11:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:10:07.183 14:11:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:07.183 14:11:59 -- accel/accel.sh@17 -- # local accel_module 00:10:07.183 14:11:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:07.183 14:11:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:07.183 14:11:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.183 14:11:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:07.183 14:11:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.183 14:11:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.183 14:11:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:07.183 14:11:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:07.183 14:11:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:07.183 14:11:59 -- accel/accel.sh@42 -- # jq -r . 00:10:07.183 [2024-11-18 14:11:59.093971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:07.183 [2024-11-18 14:11:59.094208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118074 ] 00:10:07.183 [2024-11-18 14:11:59.240087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.441 [2024-11-18 14:11:59.311947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.842 14:12:00 -- accel/accel.sh@18 -- # out=' 00:10:08.842 SPDK Configuration: 00:10:08.842 Core mask: 0x1 00:10:08.842 00:10:08.842 Accel Perf Configuration: 00:10:08.842 Workload Type: xor 00:10:08.842 Source buffers: 3 00:10:08.842 Transfer size: 4096 bytes 00:10:08.842 Vector count 1 00:10:08.842 Module: software 00:10:08.842 Queue depth: 32 00:10:08.842 Allocate depth: 32 00:10:08.842 # threads/core: 1 00:10:08.842 Run time: 1 seconds 00:10:08.842 Verify: Yes 00:10:08.843 00:10:08.843 Running for 1 seconds... 00:10:08.843 00:10:08.843 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:08.843 ------------------------------------------------------------------------------------ 00:10:08.843 0,0 214816/s 839 MiB/s 0 0 00:10:08.843 ==================================================================================== 00:10:08.843 Total 214816/s 839 MiB/s 0 0' 00:10:08.843 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:08.843 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:08.843 14:12:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:08.843 14:12:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:08.843 14:12:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.843 14:12:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.843 14:12:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.843 14:12:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.843 14:12:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.843 14:12:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.843 14:12:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.843 14:12:00 -- accel/accel.sh@42 -- # jq -r . 00:10:08.843 [2024-11-18 14:12:00.653386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
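[Annotation, not captured output] Against the two-source run (228256/s), the third source buffer costs roughly 6% of xor throughput on this host:
# Integer-truncated percent drop; prints 5 (exact value is about 5.9%).
echo $(( (228256 - 214816) * 100 / 228256 ))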
00:10:08.843 [2024-11-18 14:12:00.653797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118107 ] 00:10:08.843 [2024-11-18 14:12:00.799833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.843 [2024-11-18 14:12:00.874744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=0x1 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=xor 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=3 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=software 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@23 -- # accel_module=software 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=32 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=32 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=1 00:10:09.101 14:12:00 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val=Yes 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:09.101 14:12:00 -- accel/accel.sh@21 -- # val= 00:10:09.101 14:12:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # IFS=: 00:10:09.101 14:12:00 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@21 -- # val= 00:10:10.475 14:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # IFS=: 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@21 -- # val= 00:10:10.475 14:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # IFS=: 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@21 -- # val= 00:10:10.475 14:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # IFS=: 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@21 -- # val= 00:10:10.475 14:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # IFS=: 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@21 -- # val= 00:10:10.475 14:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # IFS=: 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@21 -- # val= 00:10:10.475 14:12:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # IFS=: 00:10:10.475 14:12:02 -- accel/accel.sh@20 -- # read -r var val 00:10:10.475 14:12:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:10.475 14:12:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:10.475 14:12:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.475 00:10:10.475 real 0m3.169s 00:10:10.475 user 0m2.659s 00:10:10.475 sys 0m0.335s 00:10:10.475 14:12:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:10.475 ************************************ 00:10:10.475 END TEST accel_xor 00:10:10.475 ************************************ 00:10:10.475 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:10:10.475 14:12:02 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:10.475 14:12:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:10.475 14:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:10.475 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:10:10.475 ************************************ 00:10:10.475 START TEST accel_dif_verify 00:10:10.475 ************************************ 
00:10:10.475 14:12:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:10:10.475 14:12:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:10.475 14:12:02 -- accel/accel.sh@17 -- # local accel_module 00:10:10.475 14:12:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:10.475 14:12:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:10.475 14:12:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.475 14:12:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:10.475 14:12:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.475 14:12:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.475 14:12:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:10.475 14:12:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:10.475 14:12:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:10.475 14:12:02 -- accel/accel.sh@42 -- # jq -r . 00:10:10.475 [2024-11-18 14:12:02.319454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:10.475 [2024-11-18 14:12:02.320188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118140 ] 00:10:10.475 [2024-11-18 14:12:02.467538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.475 [2024-11-18 14:12:02.540558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.853 14:12:03 -- accel/accel.sh@18 -- # out=' 00:10:11.853 SPDK Configuration: 00:10:11.853 Core mask: 0x1 00:10:11.853 00:10:11.853 Accel Perf Configuration: 00:10:11.853 Workload Type: dif_verify 00:10:11.853 Vector size: 4096 bytes 00:10:11.853 Transfer size: 4096 bytes 00:10:11.853 Block size: 512 bytes 00:10:11.853 Metadata size: 8 bytes 00:10:11.853 Vector count 1 00:10:11.853 Module: software 00:10:11.853 Queue depth: 32 00:10:11.853 Allocate depth: 32 00:10:11.853 # threads/core: 1 00:10:11.853 Run time: 1 seconds 00:10:11.853 Verify: No 00:10:11.853 00:10:11.853 Running for 1 seconds... 00:10:11.853 00:10:11.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:11.853 ------------------------------------------------------------------------------------ 00:10:11.853 0,0 117024/s 457 MiB/s 0 0 00:10:11.853 ==================================================================================== 00:10:11.853 Total 117024/s 457 MiB/s 0 0' 00:10:11.853 14:12:03 -- accel/accel.sh@20 -- # IFS=: 00:10:11.853 14:12:03 -- accel/accel.sh@20 -- # read -r var val 00:10:11.853 14:12:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:11.853 14:12:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:11.853 14:12:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.853 14:12:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.853 14:12:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.853 14:12:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.853 14:12:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.853 14:12:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.853 14:12:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.853 14:12:03 -- accel/accel.sh@42 -- # jq -r . 00:10:11.853 [2024-11-18 14:12:03.874316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
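[Annotation, not captured output] For the DIF workloads each 4096-byte transfer carries 4096 / 512 = 8 protected blocks, each tagged with 8 bytes of metadata. The bandwidth column follows directly from the transfer rate, which is why both rows of the table above read 457 MiB/s:
# transfers/s * transfer size, expressed in MiB/s (integer-truncated).
echo $(( 117024 * 4096 / 1048576 ))   # -> 457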
00:10:11.853 [2024-11-18 14:12:03.875125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118175 ] 00:10:12.112 [2024-11-18 14:12:04.020096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.112 [2024-11-18 14:12:04.093840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.112 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.112 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.112 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.112 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.112 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.112 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.112 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.112 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.112 14:12:04 -- accel/accel.sh@21 -- # val=0x1 00:10:12.112 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.112 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val=dif_verify 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val=software 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@23 -- # accel_module=software 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- 
accel/accel.sh@21 -- # val=32 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val=32 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val=1 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.371 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.371 14:12:04 -- accel/accel.sh@21 -- # val=No 00:10:12.371 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.372 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.372 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.372 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.372 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.372 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.372 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:12.372 14:12:04 -- accel/accel.sh@21 -- # val= 00:10:12.372 14:12:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.372 14:12:04 -- accel/accel.sh@20 -- # IFS=: 00:10:12.372 14:12:04 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@21 -- # val= 00:10:13.749 14:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # IFS=: 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@21 -- # val= 00:10:13.749 14:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # IFS=: 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@21 -- # val= 00:10:13.749 14:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # IFS=: 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@21 -- # val= 00:10:13.749 14:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # IFS=: 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@21 -- # val= 00:10:13.749 14:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # IFS=: 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@21 -- # val= 00:10:13.749 14:12:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # IFS=: 00:10:13.749 14:12:05 -- accel/accel.sh@20 -- # read -r var val 00:10:13.749 14:12:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:13.749 14:12:05 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:13.749 14:12:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:13.749 00:10:13.749 real 0m3.152s 00:10:13.749 user 0m2.644s 00:10:13.749 sys 0m0.337s 00:10:13.749 14:12:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:13.749 14:12:05 -- common/autotest_common.sh@10 -- # set +x 00:10:13.749 ************************************ 00:10:13.749 END 
TEST accel_dif_verify 00:10:13.749 ************************************ 00:10:13.749 14:12:05 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:13.749 14:12:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:13.749 14:12:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:13.749 14:12:05 -- common/autotest_common.sh@10 -- # set +x 00:10:13.749 ************************************ 00:10:13.749 START TEST accel_dif_generate 00:10:13.749 ************************************ 00:10:13.749 14:12:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:10:13.749 14:12:05 -- accel/accel.sh@16 -- # local accel_opc 00:10:13.749 14:12:05 -- accel/accel.sh@17 -- # local accel_module 00:10:13.749 14:12:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:13.749 14:12:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:13.749 14:12:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.749 14:12:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:13.749 14:12:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.749 14:12:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.749 14:12:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:13.749 14:12:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:13.749 14:12:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:13.749 14:12:05 -- accel/accel.sh@42 -- # jq -r . 00:10:13.749 [2024-11-18 14:12:05.525745] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:13.749 [2024-11-18 14:12:05.526431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118208 ] 00:10:13.749 [2024-11-18 14:12:05.672005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.749 [2024-11-18 14:12:05.740122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.125 14:12:07 -- accel/accel.sh@18 -- # out=' 00:10:15.125 SPDK Configuration: 00:10:15.125 Core mask: 0x1 00:10:15.125 00:10:15.125 Accel Perf Configuration: 00:10:15.125 Workload Type: dif_generate 00:10:15.125 Vector size: 4096 bytes 00:10:15.125 Transfer size: 4096 bytes 00:10:15.125 Block size: 512 bytes 00:10:15.125 Metadata size: 8 bytes 00:10:15.125 Vector count 1 00:10:15.125 Module: software 00:10:15.125 Queue depth: 32 00:10:15.125 Allocate depth: 32 00:10:15.125 # threads/core: 1 00:10:15.125 Run time: 1 seconds 00:10:15.125 Verify: No 00:10:15.125 00:10:15.125 Running for 1 seconds... 
00:10:15.125 00:10:15.125 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:15.125 ------------------------------------------------------------------------------------ 00:10:15.125 0,0 141344/s 552 MiB/s 0 0 00:10:15.125 ==================================================================================== 00:10:15.125 Total 141344/s 552 MiB/s 0 0' 00:10:15.125 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.125 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.125 14:12:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:15.125 14:12:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:15.125 14:12:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:15.126 14:12:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:15.126 14:12:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.126 14:12:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.126 14:12:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:15.126 14:12:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:15.126 14:12:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:15.126 14:12:07 -- accel/accel.sh@42 -- # jq -r . 00:10:15.126 [2024-11-18 14:12:07.078357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:15.126 [2024-11-18 14:12:07.078606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118245 ] 00:10:15.385 [2024-11-18 14:12:07.223867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.385 [2024-11-18 14:12:07.297663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=0x1 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=dif_generate 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 
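[Annotation, not captured output] The IFS=: / read -r var val / case lines tracing through here are accel.sh splitting each 'key: value' line of accel_perf's report to recover the run parameters. A sketch of that loop under the names the trace itself uses ($out, $var, $val, accel_opc); the case branch shown is illustrative only:
# Parse "key: value" report lines the way the xtrace implies.
while IFS=: read -r var val; do
  case "$var" in
    *'Workload Type') accel_opc=${val# } ;;  # illustrative branch
  esac
done <<< "$out"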
00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=software 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@23 -- # accel_module=software 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=32 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=32 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=1 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val=No 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:15.385 14:12:07 -- accel/accel.sh@21 -- # val= 00:10:15.385 14:12:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # IFS=: 00:10:15.385 14:12:07 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@21 -- # val= 00:10:16.765 14:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # IFS=: 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@21 -- # val= 00:10:16.765 14:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # IFS=: 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@21 -- # val= 00:10:16.765 14:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.765 14:12:08 -- 
accel/accel.sh@20 -- # IFS=: 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@21 -- # val= 00:10:16.765 14:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # IFS=: 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@21 -- # val= 00:10:16.765 14:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # IFS=: 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@21 -- # val= 00:10:16.765 14:12:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # IFS=: 00:10:16.765 14:12:08 -- accel/accel.sh@20 -- # read -r var val 00:10:16.765 14:12:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:16.765 14:12:08 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:16.765 14:12:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.765 00:10:16.765 real 0m3.142s 00:10:16.765 user 0m2.613s 00:10:16.765 sys 0m0.355s 00:10:16.765 14:12:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:16.765 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:10:16.765 ************************************ 00:10:16.765 END TEST accel_dif_generate 00:10:16.765 ************************************ 00:10:16.765 14:12:08 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:16.765 14:12:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:16.765 14:12:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.765 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:10:16.765 ************************************ 00:10:16.765 START TEST accel_dif_generate_copy 00:10:16.765 ************************************ 00:10:16.765 14:12:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:10:16.765 14:12:08 -- accel/accel.sh@16 -- # local accel_opc 00:10:16.766 14:12:08 -- accel/accel.sh@17 -- # local accel_module 00:10:16.766 14:12:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:16.766 14:12:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:16.766 14:12:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.766 14:12:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.766 14:12:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.766 14:12:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.766 14:12:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.766 14:12:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.766 14:12:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.766 14:12:08 -- accel/accel.sh@42 -- # jq -r . 00:10:16.766 [2024-11-18 14:12:08.727217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:16.766 [2024-11-18 14:12:08.727465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118278 ] 00:10:17.025 [2024-11-18 14:12:08.871438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.025 [2024-11-18 14:12:08.933818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.400 14:12:10 -- accel/accel.sh@18 -- # out=' 00:10:18.400 SPDK Configuration: 00:10:18.400 Core mask: 0x1 00:10:18.400 00:10:18.400 Accel Perf Configuration: 00:10:18.400 Workload Type: dif_generate_copy 00:10:18.400 Vector size: 4096 bytes 00:10:18.400 Transfer size: 4096 bytes 00:10:18.400 Vector count 1 00:10:18.400 Module: software 00:10:18.400 Queue depth: 32 00:10:18.400 Allocate depth: 32 00:10:18.400 # threads/core: 1 00:10:18.400 Run time: 1 seconds 00:10:18.400 Verify: No 00:10:18.400 00:10:18.400 Running for 1 seconds... 00:10:18.401 00:10:18.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:18.401 ------------------------------------------------------------------------------------ 00:10:18.401 0,0 110272/s 437 MiB/s 0 0 00:10:18.401 ==================================================================================== 00:10:18.401 Total 110272/s 430 MiB/s 0 0' 00:10:18.401 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.401 14:12:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:18.401 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.401 14:12:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:18.401 14:12:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.401 14:12:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.401 14:12:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.401 14:12:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.401 14:12:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.401 14:12:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.401 14:12:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.401 14:12:10 -- accel/accel.sh@42 -- # jq -r . 00:10:18.401 [2024-11-18 14:12:10.251704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
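In the dif_generate_copy table just above, the Total row is plain arithmetic: 110272 transfers/s at 4096 bytes each is 430 MiB/s. The per-core row's 437 MiB/s works out to 4160 bytes per operation, which would be 4096 bytes of data plus 64 bytes of DIF metadata (8 bytes per 512-byte block, the same sizes the dif_generate settings earlier in the trace were read with); that is a plausible reading, not something the log states. A shell spot-check:

    # recompute both MiB/s figures from the table; bash integer division
    # truncates exactly as the log does
    echo $(( 110272 * 4096 / 1048576 ))          # 430 -> Total row
    echo $(( 110272 * (4096 + 64) / 1048576 ))   # 437 -> per-core row, if DIF bytes count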
00:10:18.401 [2024-11-18 14:12:10.251889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118313 ] 00:10:18.401 [2024-11-18 14:12:10.387230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.659 [2024-11-18 14:12:10.475341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.659 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.659 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.659 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.659 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.659 14:12:10 -- accel/accel.sh@21 -- # val=0x1 00:10:18.659 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.659 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.659 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.659 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.659 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.659 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.659 14:12:10 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:18.659 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val=software 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@23 -- # accel_module=software 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val=32 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val=32 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 
-- # val=1 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val=No 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:18.660 14:12:10 -- accel/accel.sh@21 -- # val= 00:10:18.660 14:12:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # IFS=: 00:10:18.660 14:12:10 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@21 -- # val= 00:10:20.036 14:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # IFS=: 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@21 -- # val= 00:10:20.036 14:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # IFS=: 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@21 -- # val= 00:10:20.036 14:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # IFS=: 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@21 -- # val= 00:10:20.036 14:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # IFS=: 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@21 -- # val= 00:10:20.036 14:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # IFS=: 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@21 -- # val= 00:10:20.036 14:12:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # IFS=: 00:10:20.036 14:12:11 -- accel/accel.sh@20 -- # read -r var val 00:10:20.036 14:12:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:20.036 14:12:11 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:20.036 14:12:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.036 00:10:20.036 real 0m3.091s 00:10:20.036 user 0m2.610s 00:10:20.036 sys 0m0.320s 00:10:20.036 14:12:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:20.036 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:10:20.036 ************************************ 00:10:20.036 END TEST accel_dif_generate_copy 00:10:20.036 ************************************ 00:10:20.036 14:12:11 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:20.036 14:12:11 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:20.036 14:12:11 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:20.036 14:12:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:20.036 14:12:11 -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.036 ************************************ 00:10:20.036 START TEST accel_comp 00:10:20.036 ************************************ 00:10:20.036 14:12:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:20.036 14:12:11 -- accel/accel.sh@16 -- # local accel_opc 00:10:20.036 14:12:11 -- accel/accel.sh@17 -- # local accel_module 00:10:20.036 14:12:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:20.036 14:12:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:20.036 14:12:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:20.036 14:12:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:20.036 14:12:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.036 14:12:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.036 14:12:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:20.036 14:12:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:20.036 14:12:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:20.036 14:12:11 -- accel/accel.sh@42 -- # jq -r . 00:10:20.036 [2024-11-18 14:12:11.873561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:20.036 [2024-11-18 14:12:11.873801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118353 ] 00:10:20.036 [2024-11-18 14:12:12.019120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.036 [2024-11-18 14:12:12.099322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.411 14:12:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:21.411 00:10:21.411 SPDK Configuration: 00:10:21.411 Core mask: 0x1 00:10:21.411 00:10:21.411 Accel Perf Configuration: 00:10:21.411 Workload Type: compress 00:10:21.411 Transfer size: 4096 bytes 00:10:21.411 Vector count 1 00:10:21.411 Module: software 00:10:21.411 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:21.411 Queue depth: 32 00:10:21.411 Allocate depth: 32 00:10:21.411 # threads/core: 1 00:10:21.411 Run time: 1 seconds 00:10:21.411 Verify: No 00:10:21.411 00:10:21.411 Running for 1 seconds... 
00:10:21.411 00:10:21.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.411 ------------------------------------------------------------------------------------ 00:10:21.411 0,0 59904/s 249 MiB/s 0 0 00:10:21.411 ==================================================================================== 00:10:21.411 Total 59904/s 234 MiB/s 0 0' 00:10:21.411 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.411 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.411 14:12:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:21.411 14:12:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.412 14:12:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:21.412 14:12:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.412 14:12:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.412 14:12:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.412 14:12:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.412 14:12:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.412 14:12:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.412 14:12:13 -- accel/accel.sh@42 -- # jq -r . 00:10:21.412 [2024-11-18 14:12:13.459207] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:21.412 [2024-11-18 14:12:13.459934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118380 ] 00:10:21.670 [2024-11-18 14:12:13.604163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.670 [2024-11-18 14:12:13.698785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.928 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.928 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.928 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.928 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.928 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.928 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.928 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.928 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=0x1 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=compress 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 
00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=software 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@23 -- # accel_module=software 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=32 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=32 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=1 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val=No 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:21.929 14:12:13 -- accel/accel.sh@21 -- # val= 00:10:21.929 14:12:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # IFS=: 00:10:21.929 14:12:13 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@21 -- # val= 00:10:23.304 14:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # IFS=: 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@21 -- # val= 00:10:23.304 14:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # IFS=: 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@21 -- # val= 00:10:23.304 14:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # IFS=: 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@21 -- # val= 
00:10:23.304 14:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # IFS=: 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@21 -- # val= 00:10:23.304 14:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # IFS=: 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@21 -- # val= 00:10:23.304 14:12:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # IFS=: 00:10:23.304 14:12:15 -- accel/accel.sh@20 -- # read -r var val 00:10:23.304 14:12:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:23.304 14:12:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:23.304 14:12:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.304 00:10:23.304 real 0m3.192s 00:10:23.304 user 0m2.649s 00:10:23.304 sys 0m0.376s 00:10:23.304 14:12:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.304 ************************************ 00:10:23.304 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:10:23.304 END TEST accel_comp 00:10:23.304 ************************************ 00:10:23.304 14:12:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:23.304 14:12:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:23.304 14:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.304 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:10:23.304 ************************************ 00:10:23.304 START TEST accel_decomp 00:10:23.304 ************************************ 00:10:23.304 14:12:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:23.304 14:12:15 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.304 14:12:15 -- accel/accel.sh@17 -- # local accel_module 00:10:23.304 14:12:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:23.304 14:12:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:23.304 14:12:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.304 14:12:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.304 14:12:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.304 14:12:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.304 14:12:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.304 14:12:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.304 14:12:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.304 14:12:15 -- accel/accel.sh@42 -- # jq -r . 00:10:23.304 [2024-11-18 14:12:15.122617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:23.304 [2024-11-18 14:12:15.122876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118421 ] 00:10:23.304 [2024-11-18 14:12:15.270468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.304 [2024-11-18 14:12:15.349181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.679 14:12:16 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:24.679 00:10:24.679 SPDK Configuration: 00:10:24.679 Core mask: 0x1 00:10:24.679 00:10:24.679 Accel Perf Configuration: 00:10:24.679 Workload Type: decompress 00:10:24.679 Transfer size: 4096 bytes 00:10:24.679 Vector count 1 00:10:24.679 Module: software 00:10:24.679 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:24.679 Queue depth: 32 00:10:24.679 Allocate depth: 32 00:10:24.679 # threads/core: 1 00:10:24.679 Run time: 1 seconds 00:10:24.679 Verify: Yes 00:10:24.679 00:10:24.679 Running for 1 seconds... 00:10:24.679 00:10:24.679 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:24.679 ------------------------------------------------------------------------------------ 00:10:24.679 0,0 75072/s 138 MiB/s 0 0 00:10:24.679 ==================================================================================== 00:10:24.679 Total 75072/s 293 MiB/s 0 0' 00:10:24.679 14:12:16 -- accel/accel.sh@20 -- # IFS=: 00:10:24.679 14:12:16 -- accel/accel.sh@20 -- # read -r var val 00:10:24.679 14:12:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:24.679 14:12:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:24.679 14:12:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.679 14:12:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.679 14:12:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.679 14:12:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.679 14:12:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.679 14:12:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.679 14:12:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.679 14:12:16 -- accel/accel.sh@42 -- # jq -r . 00:10:24.679 [2024-11-18 14:12:16.709480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
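The decompress tests point accel_perf at a pre-compressed copy of test/accel/bib (-l) and verify the inflated output (-y). In the table above, the Total row again follows from the op rate: 75072/s at 4096 bytes is 293 MiB/s of uncompressed output; the lower per-core figure would correspond to the compressed input side at roughly a 2:1 ratio, though the tool does not label it that way. A standalone equivalent of the traced command, minus the -c /dev/fd/62 JSON-config plumbing the harness adds:

    # inflate 4 KiB chunks of the bib test file for 1 second, verifying
    # the output against the original (-y); mirrors the traced invocation
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y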
00:10:24.679 [2024-11-18 14:12:16.710113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118450 ] 00:10:24.938 [2024-11-18 14:12:16.855676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.938 [2024-11-18 14:12:16.944983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.196 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.196 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.196 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.196 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.196 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.196 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.196 14:12:17 -- accel/accel.sh@21 -- # val=0x1 00:10:25.196 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.196 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.196 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.196 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.196 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.196 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val=decompress 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val=software 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@23 -- # accel_module=software 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val=32 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- 
accel/accel.sh@21 -- # val=32 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val=1 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val=Yes 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:25.197 14:12:17 -- accel/accel.sh@21 -- # val= 00:10:25.197 14:12:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # IFS=: 00:10:25.197 14:12:17 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@21 -- # val= 00:10:26.573 14:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # IFS=: 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@21 -- # val= 00:10:26.573 14:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # IFS=: 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@21 -- # val= 00:10:26.573 14:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # IFS=: 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@21 -- # val= 00:10:26.573 14:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # IFS=: 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@21 -- # val= 00:10:26.573 14:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # IFS=: 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@21 -- # val= 00:10:26.573 14:12:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # IFS=: 00:10:26.573 14:12:18 -- accel/accel.sh@20 -- # read -r var val 00:10:26.573 14:12:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:26.573 14:12:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:26.573 14:12:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:26.573 00:10:26.573 real 0m3.205s 00:10:26.573 user 0m2.652s 00:10:26.573 sys 0m0.374s 00:10:26.573 14:12:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:26.573 ************************************ 00:10:26.573 END TEST accel_decomp 00:10:26.573 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:10:26.573 ************************************ 00:10:26.573 14:12:18 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
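The next test really is named accel_decmop_full: the transposed "decmop" comes from the harness's own run_test call (accel.sh line 110 in the trace), not from corruption of the log. Functionally it differs from accel_decomp only by -o 0, a transfer size of zero, which the run below resolves to one full 111250-byte record per operation instead of 4 KiB chunks. The equivalent standalone invocation, as traced:

    # full-record decompress: with -o 0 the transfer size is taken from
    # the input itself; the run below reports "Transfer size: 111250 bytes"
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0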
00:10:26.573 14:12:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:26.573 14:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.573 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:10:26.573 ************************************ 00:10:26.573 START TEST accel_decmop_full 00:10:26.573 ************************************ 00:10:26.573 14:12:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:26.573 14:12:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:26.573 14:12:18 -- accel/accel.sh@17 -- # local accel_module 00:10:26.573 14:12:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:26.573 14:12:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:26.573 14:12:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.573 14:12:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.573 14:12:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.573 14:12:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.573 14:12:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.573 14:12:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.573 14:12:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.573 14:12:18 -- accel/accel.sh@42 -- # jq -r . 00:10:26.573 [2024-11-18 14:12:18.380404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:26.574 [2024-11-18 14:12:18.380641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118491 ] 00:10:26.574 [2024-11-18 14:12:18.526930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.574 [2024-11-18 14:12:18.605785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.950 14:12:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:27.950 00:10:27.950 SPDK Configuration: 00:10:27.950 Core mask: 0x1 00:10:27.950 00:10:27.950 Accel Perf Configuration: 00:10:27.950 Workload Type: decompress 00:10:27.950 Transfer size: 111250 bytes 00:10:27.950 Vector count 1 00:10:27.950 Module: software 00:10:27.950 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:27.950 Queue depth: 32 00:10:27.950 Allocate depth: 32 00:10:27.950 # threads/core: 1 00:10:27.950 Run time: 1 seconds 00:10:27.950 Verify: Yes 00:10:27.950 00:10:27.950 Running for 1 seconds... 
00:10:27.950 00:10:27.950 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:27.950 ------------------------------------------------------------------------------------ 00:10:27.950 0,0 5632/s 232 MiB/s 0 0 00:10:27.950 ==================================================================================== 00:10:27.950 Total 5632/s 597 MiB/s 0 0' 00:10:27.950 14:12:19 -- accel/accel.sh@20 -- # IFS=: 00:10:27.950 14:12:19 -- accel/accel.sh@20 -- # read -r var val 00:10:27.950 14:12:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:27.950 14:12:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:27.950 14:12:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:27.951 14:12:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:27.951 14:12:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.951 14:12:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.951 14:12:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:27.951 14:12:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:27.951 14:12:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:27.951 14:12:19 -- accel/accel.sh@42 -- # jq -r . 00:10:27.951 [2024-11-18 14:12:19.934166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:27.951 [2024-11-18 14:12:19.934348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118518 ] 00:10:28.210 [2024-11-18 14:12:20.073370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.210 [2024-11-18 14:12:20.158367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val=0x1 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val=decompress 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:28.210 14:12:20 -- 
accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val=software 00:10:28.210 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.210 14:12:20 -- accel/accel.sh@23 -- # accel_module=software 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.210 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.210 14:12:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val=32 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val=32 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val=1 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val=Yes 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:28.211 14:12:20 -- accel/accel.sh@21 -- # val= 00:10:28.211 14:12:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # IFS=: 00:10:28.211 14:12:20 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- accel/accel.sh@21 -- # val= 00:10:29.603 14:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # IFS=: 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- accel/accel.sh@21 -- # val= 00:10:29.603 14:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # IFS=: 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- accel/accel.sh@21 -- # val= 00:10:29.603 14:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # IFS=: 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- 
accel/accel.sh@21 -- # val= 00:10:29.603 14:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # IFS=: 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- accel/accel.sh@21 -- # val= 00:10:29.603 14:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # IFS=: 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- accel/accel.sh@21 -- # val= 00:10:29.603 14:12:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # IFS=: 00:10:29.603 14:12:21 -- accel/accel.sh@20 -- # read -r var val 00:10:29.603 14:12:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:29.603 14:12:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:29.603 14:12:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.603 00:10:29.603 real 0m3.076s 00:10:29.603 user 0m2.605s 00:10:29.603 sys 0m0.315s 00:10:29.603 14:12:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:29.603 14:12:21 -- common/autotest_common.sh@10 -- # set +x 00:10:29.603 ************************************ 00:10:29.603 END TEST accel_decmop_full 00:10:29.603 ************************************ 00:10:29.603 14:12:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:29.603 14:12:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:29.603 14:12:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:29.603 14:12:21 -- common/autotest_common.sh@10 -- # set +x 00:10:29.603 ************************************ 00:10:29.603 START TEST accel_decomp_mcore 00:10:29.603 ************************************ 00:10:29.603 14:12:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:29.603 14:12:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.603 14:12:21 -- accel/accel.sh@17 -- # local accel_module 00:10:29.603 14:12:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:29.603 14:12:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:29.603 14:12:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.603 14:12:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.603 14:12:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.603 14:12:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.603 14:12:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.603 14:12:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.603 14:12:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.603 14:12:21 -- accel/accel.sh@42 -- # jq -r . 00:10:29.603 [2024-11-18 14:12:21.508950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
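The mcore variant adds -m 0xf, a CPU mask with bits 0 through 3 set, so EAL reports four available cores and a reactor starts on each; the result table below gains one "core,thread" row per reactor. Such a contiguous mask can be built arithmetically (a hypothetical helper, not something the harness does):

    # mask covering the first n cores: n=4 yields 0xf, matching the traced -m 0xf
    n=4
    printf -- '-m 0x%x\n' $(( (1 << n) - 1 ))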
00:10:29.603 [2024-11-18 14:12:21.509201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118559 ] 00:10:29.881 [2024-11-18 14:12:21.675667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.881 [2024-11-18 14:12:21.749858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.881 [2024-11-18 14:12:21.750005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.881 [2024-11-18 14:12:21.750108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.881 [2024-11-18 14:12:21.750109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.259 14:12:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:31.259 00:10:31.259 SPDK Configuration: 00:10:31.259 Core mask: 0xf 00:10:31.259 00:10:31.259 Accel Perf Configuration: 00:10:31.259 Workload Type: decompress 00:10:31.259 Transfer size: 4096 bytes 00:10:31.259 Vector count 1 00:10:31.259 Module: software 00:10:31.259 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.259 Queue depth: 32 00:10:31.259 Allocate depth: 32 00:10:31.259 # threads/core: 1 00:10:31.259 Run time: 1 seconds 00:10:31.259 Verify: Yes 00:10:31.259 00:10:31.259 Running for 1 seconds... 00:10:31.259 00:10:31.259 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:31.259 ------------------------------------------------------------------------------------ 00:10:31.259 0,0 52352/s 96 MiB/s 0 0 00:10:31.259 3,0 50240/s 92 MiB/s 0 0 00:10:31.259 2,0 51200/s 94 MiB/s 0 0 00:10:31.259 1,0 51648/s 95 MiB/s 0 0 00:10:31.259 ==================================================================================== 00:10:31.259 Total 205440/s 802 MiB/s 0 0' 00:10:31.259 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.259 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.259 14:12:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:31.259 14:12:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.259 14:12:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:31.259 14:12:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.259 14:12:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.259 14:12:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.259 14:12:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.259 14:12:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.259 14:12:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.259 14:12:23 -- accel/accel.sh@42 -- # jq -r . 00:10:31.259 [2024-11-18 14:12:23.080779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
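The four-core table above is additive in the Transfers column: 52352 + 50240 + 51200 + 51648 is 205440, exactly the Total row, and 205440/s at 4096 bytes each gives the reported 802 MiB/s aggregate. A quick consistency check:

    # verify the mcore Total row against its per-core rows
    echo $(( 52352 + 50240 + 51200 + 51648 ))   # 205440 transfers/s
    echo $(( 205440 * 4096 / 1048576 ))         # 802 MiB/s aggregate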
00:10:31.259 [2024-11-18 14:12:23.081024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118592 ] 00:10:31.259 [2024-11-18 14:12:23.236298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.259 [2024-11-18 14:12:23.322164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.259 [2024-11-18 14:12:23.322312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.259 [2024-11-18 14:12:23.322469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.259 [2024-11-18 14:12:23.322469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=0xf 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=decompress 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=software 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@23 -- # accel_module=software 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 
00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=32 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=32 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=1 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val=Yes 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:31.519 14:12:23 -- accel/accel.sh@21 -- # val= 00:10:31.519 14:12:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # IFS=: 00:10:31.519 14:12:23 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- 
accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@21 -- # val= 00:10:32.897 14:12:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # IFS=: 00:10:32.897 14:12:24 -- accel/accel.sh@20 -- # read -r var val 00:10:32.897 14:12:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.897 14:12:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:32.897 14:12:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.897 00:10:32.897 real 0m3.182s 00:10:32.897 user 0m9.885s 00:10:32.897 sys 0m0.363s 00:10:32.897 14:12:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.897 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:10:32.897 ************************************ 00:10:32.897 END TEST accel_decomp_mcore 00:10:32.897 ************************************ 00:10:32.897 14:12:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:32.897 14:12:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:32.897 14:12:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.897 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:10:32.897 ************************************ 00:10:32.897 START TEST accel_decomp_full_mcore 00:10:32.897 ************************************ 00:10:32.897 14:12:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:32.897 14:12:24 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.897 14:12:24 -- accel/accel.sh@17 -- # local accel_module 00:10:32.897 14:12:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:32.897 14:12:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:32.897 14:12:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.897 14:12:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.897 14:12:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.897 14:12:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.897 14:12:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.897 14:12:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.897 14:12:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.897 14:12:24 -- accel/accel.sh@42 -- # jq -r . 00:10:32.897 [2024-11-18 14:12:24.747247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:32.897 [2024-11-18 14:12:24.747501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118633 ] 00:10:32.897 [2024-11-18 14:12:24.912484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.156 [2024-11-18 14:12:24.992460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.156 [2024-11-18 14:12:24.992602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.156 [2024-11-18 14:12:24.992748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.156 [2024-11-18 14:12:24.992748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.534 14:12:26 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:34.534 00:10:34.534 SPDK Configuration: 00:10:34.534 Core mask: 0xf 00:10:34.534 00:10:34.534 Accel Perf Configuration: 00:10:34.534 Workload Type: decompress 00:10:34.534 Transfer size: 111250 bytes 00:10:34.534 Vector count 1 00:10:34.534 Module: software 00:10:34.534 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:34.534 Queue depth: 32 00:10:34.534 Allocate depth: 32 00:10:34.534 # threads/core: 1 00:10:34.534 Run time: 1 seconds 00:10:34.534 Verify: Yes 00:10:34.534 00:10:34.534 Running for 1 seconds... 00:10:34.534 00:10:34.534 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:34.534 ------------------------------------------------------------------------------------ 00:10:34.534 0,0 5440/s 224 MiB/s 0 0 00:10:34.534 3,0 5344/s 220 MiB/s 0 0 00:10:34.534 2,0 5312/s 219 MiB/s 0 0 00:10:34.534 1,0 5312/s 219 MiB/s 0 0 00:10:34.534 ==================================================================================== 00:10:34.534 Total 21408/s 2271 MiB/s 0 0' 00:10:34.534 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.534 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.534 14:12:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:34.534 14:12:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:34.534 14:12:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.534 14:12:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.534 14:12:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.534 14:12:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.534 14:12:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.534 14:12:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.534 14:12:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.534 14:12:26 -- accel/accel.sh@42 -- # jq -r . 00:10:34.534 [2024-11-18 14:12:26.359201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
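For anyone reproducing this step outside the harness: the run_test wrapper above reduces to a single accel_perf invocation, and the "Accel Perf Configuration" dump it prints is the authoritative record of what each flag selected. A minimal sketch using this job's workspace paths (flag descriptions are inferred from that dump, not restated from accel_perf's help text):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Decompress the pre-built bitstream on cores 0-3 (-m 0xf) for one second,
    # verifying output (-y). With -o 0 the transfer size falls back to the
    # bitstream's full chunk, hence "Transfer size: 111250 bytes" in the dump.
    # The harness additionally passes -c /dev/fd/62 to feed a generated JSON config.
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf

The per-core rows in the results table (0,0 through 3,0) confirm that all four reactors in the 0xf mask ran the workload.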
00:10:34.534 [2024-11-18 14:12:26.359845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118666 ] 00:10:34.534 [2024-11-18 14:12:26.522959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.793 [2024-11-18 14:12:26.611286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.793 [2024-11-18 14:12:26.611438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.793 [2024-11-18 14:12:26.611479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.793 [2024-11-18 14:12:26.611479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val=0xf 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val=decompress 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.793 14:12:26 -- accel/accel.sh@21 -- # val=software 00:10:34.793 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.793 14:12:26 -- accel/accel.sh@23 -- # accel_module=software 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.793 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 
00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val=32 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val=32 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val=1 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val=Yes 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:34.794 14:12:26 -- accel/accel.sh@21 -- # val= 00:10:34.794 14:12:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # IFS=: 00:10:34.794 14:12:26 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- 
accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@21 -- # val= 00:10:36.218 14:12:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # IFS=: 00:10:36.218 14:12:27 -- accel/accel.sh@20 -- # read -r var val 00:10:36.218 14:12:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:36.218 14:12:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:36.218 14:12:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:36.218 00:10:36.218 real 0m3.239s 00:10:36.218 user 0m9.951s 00:10:36.218 sys 0m0.438s 00:10:36.218 14:12:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:36.218 ************************************ 00:10:36.218 END TEST accel_decomp_full_mcore 00:10:36.218 ************************************ 00:10:36.218 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:10:36.218 14:12:27 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:36.218 14:12:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:36.218 14:12:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.218 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:10:36.218 ************************************ 00:10:36.218 START TEST accel_decomp_mthread 00:10:36.218 ************************************ 00:10:36.218 14:12:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:36.218 14:12:28 -- accel/accel.sh@16 -- # local accel_opc 00:10:36.218 14:12:28 -- accel/accel.sh@17 -- # local accel_module 00:10:36.218 14:12:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:36.218 14:12:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:36.218 14:12:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.218 14:12:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.218 14:12:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.218 14:12:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.218 14:12:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.218 14:12:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.218 14:12:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.218 14:12:28 -- accel/accel.sh@42 -- # jq -r . 00:10:36.218 [2024-11-18 14:12:28.041371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:36.218 [2024-11-18 14:12:28.041641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118709 ] 00:10:36.218 [2024-11-18 14:12:28.187252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.218 [2024-11-18 14:12:28.250085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.603 14:12:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:37.603 00:10:37.603 SPDK Configuration: 00:10:37.603 Core mask: 0x1 00:10:37.603 00:10:37.603 Accel Perf Configuration: 00:10:37.603 Workload Type: decompress 00:10:37.603 Transfer size: 4096 bytes 00:10:37.603 Vector count 1 00:10:37.603 Module: software 00:10:37.603 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:37.603 Queue depth: 32 00:10:37.603 Allocate depth: 32 00:10:37.603 # threads/core: 2 00:10:37.603 Run time: 1 seconds 00:10:37.603 Verify: Yes 00:10:37.603 00:10:37.603 Running for 1 seconds... 00:10:37.603 00:10:37.603 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.603 ------------------------------------------------------------------------------------ 00:10:37.603 0,1 38048/s 70 MiB/s 0 0 00:10:37.603 0,0 37920/s 69 MiB/s 0 0 00:10:37.603 ==================================================================================== 00:10:37.603 Total 75968/s 296 MiB/s 0 0' 00:10:37.603 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.603 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.603 14:12:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:37.603 14:12:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:37.603 14:12:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.603 14:12:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.603 14:12:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.603 14:12:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.603 14:12:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.603 14:12:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.603 14:12:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.603 14:12:29 -- accel/accel.sh@42 -- # jq -r . 00:10:37.603 [2024-11-18 14:12:29.575481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
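This mthread pass swaps multi-core for multi-thread: the dump records core mask 0x1 with "# threads/core: 2", so the Core,Thread column now distinguishes two workers on core 0 (rows 0,0 and 0,1) rather than four cores. A sketch of the corresponding invocation, again with flag meanings taken from the configuration dump:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Single core (mask 0x1, as recorded in the dump), two worker threads on it
    # (-T 2); the 4096-byte transfer size is what accel_perf uses here when -o
    # is not given.
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2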
00:10:37.603 [2024-11-18 14:12:29.575721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118739 ] 00:10:37.861 [2024-11-18 14:12:29.717838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.861 [2024-11-18 14:12:29.795407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=0x1 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=decompress 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=software 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@23 -- # accel_module=software 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=32 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- 
accel/accel.sh@21 -- # val=32 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=2 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val=Yes 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:37.861 14:12:29 -- accel/accel.sh@21 -- # val= 00:10:37.861 14:12:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # IFS=: 00:10:37.861 14:12:29 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@21 -- # val= 00:10:39.240 14:12:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # IFS=: 00:10:39.240 14:12:31 -- accel/accel.sh@20 -- # read -r var val 00:10:39.240 14:12:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:39.240 14:12:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:39.240 14:12:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:39.240 00:10:39.240 real 0m3.102s 00:10:39.240 user 0m2.595s 00:10:39.240 sys 0m0.354s 00:10:39.240 14:12:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.240 14:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:39.240 ************************************ 00:10:39.240 END 
TEST accel_decomp_mthread 00:10:39.240 ************************************ 00:10:39.240 14:12:31 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.240 14:12:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:39.240 14:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.240 14:12:31 -- common/autotest_common.sh@10 -- # set +x 00:10:39.240 ************************************ 00:10:39.240 START TEST accel_deomp_full_mthread 00:10:39.240 ************************************ 00:10:39.240 14:12:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.240 14:12:31 -- accel/accel.sh@16 -- # local accel_opc 00:10:39.240 14:12:31 -- accel/accel.sh@17 -- # local accel_module 00:10:39.240 14:12:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.240 14:12:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:39.240 14:12:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.240 14:12:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.240 14:12:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.240 14:12:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.240 14:12:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.240 14:12:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.240 14:12:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.240 14:12:31 -- accel/accel.sh@42 -- # jq -r . 00:10:39.240 [2024-11-18 14:12:31.201228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:39.240 [2024-11-18 14:12:31.201467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118777 ] 00:10:39.499 [2024-11-18 14:12:31.347594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.499 [2024-11-18 14:12:31.409712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.876 14:12:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:40.876 00:10:40.876 SPDK Configuration: 00:10:40.876 Core mask: 0x1 00:10:40.876 00:10:40.876 Accel Perf Configuration: 00:10:40.876 Workload Type: decompress 00:10:40.876 Transfer size: 111250 bytes 00:10:40.876 Vector count 1 00:10:40.876 Module: software 00:10:40.876 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:40.876 Queue depth: 32 00:10:40.876 Allocate depth: 32 00:10:40.876 # threads/core: 2 00:10:40.876 Run time: 1 seconds 00:10:40.876 Verify: Yes 00:10:40.876 00:10:40.876 Running for 1 seconds... 
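The START TEST/END TEST banners and the real/user/sys triple that bracket every test in this log come from the harness's run_test wrapper. A simplified sketch of that pattern for orientation only; the real wrapper in autotest_common.sh also manages xtrace state and the argument checks visible above:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # produces the real/user/sys lines seen in this log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }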
00:10:40.876 00:10:40.876 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.876 ------------------------------------------------------------------------------------ 00:10:40.876 0,1 2848/s 117 MiB/s 0 0 00:10:40.876 0,0 2816/s 116 MiB/s 0 0 00:10:40.876 ==================================================================================== 00:10:40.876 Total 5664/s 600 MiB/s 0 0' 00:10:40.876 14:12:32 -- accel/accel.sh@20 -- # IFS=: 00:10:40.876 14:12:32 -- accel/accel.sh@20 -- # read -r var val 00:10:40.876 14:12:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:40.876 14:12:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:40.876 14:12:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.876 14:12:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.876 14:12:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.876 14:12:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.876 14:12:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.876 14:12:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.876 14:12:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.876 14:12:32 -- accel/accel.sh@42 -- # jq -r . 00:10:40.877 [2024-11-18 14:12:32.757225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:40.877 [2024-11-18 14:12:32.757909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118807 ] 00:10:40.877 [2024-11-18 14:12:32.902913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.135 [2024-11-18 14:12:32.977145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=0x1 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=decompress 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=software 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@23 -- # accel_module=software 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=32 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=32 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=2 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val=Yes 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:41.135 14:12:33 -- accel/accel.sh@21 -- # val= 00:10:41.135 14:12:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # IFS=: 00:10:41.135 14:12:33 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # 
read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@21 -- # val= 00:10:42.512 14:12:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # IFS=: 00:10:42.512 14:12:34 -- accel/accel.sh@20 -- # read -r var val 00:10:42.512 14:12:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.512 14:12:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:42.512 14:12:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.512 00:10:42.512 real 0m3.146s 00:10:42.512 user 0m2.665s 00:10:42.512 sys 0m0.321s 00:10:42.512 14:12:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.512 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:10:42.512 ************************************ 00:10:42.512 END TEST accel_deomp_full_mthread 00:10:42.512 ************************************ 00:10:42.512 14:12:34 -- accel/accel.sh@116 -- # [[ n == y ]] 00:10:42.512 14:12:34 -- accel/accel.sh@129 -- # build_accel_config 00:10:42.513 14:12:34 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:42.513 14:12:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.513 14:12:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.513 14:12:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:42.513 14:12:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.513 14:12:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.513 14:12:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.513 14:12:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.513 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:10:42.513 14:12:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.513 14:12:34 -- accel/accel.sh@42 -- # jq -r . 00:10:42.513 ************************************ 00:10:42.513 START TEST accel_dif_functional_tests 00:10:42.513 ************************************ 00:10:42.513 14:12:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:42.513 [2024-11-18 14:12:34.440159] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
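accel_dif_functional_tests moves from throughput to correctness of T10 protection information: each 8-byte DIF tuple carries a 16-bit guard (a CRC over the data block), a 16-bit application tag, and a 32-bit reference tag, and the "Failed to compare ... Expected=..., Actual=..." errors in the CUnit output below are negative cases intentionally tripping each check. The binary reads its JSON config from an inherited descriptor; a sketch of a direct invocation (the empty config object is an assumption here, the harness generates the real one via build_accel_config):

    SPDK=/home/vagrant/spdk_repo/spdk
    # /dev/fd/62 in the recorded command is the harness's config descriptor;
    # process substitution gives the same effect from a plain shell.
    "$SPDK/test/accel/dif/dif" -c <(echo '{}')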
00:10:42.513 [2024-11-18 14:12:34.440404] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118854 ] 00:10:42.772 [2024-11-18 14:12:34.597235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.772 [2024-11-18 14:12:34.663319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.772 [2024-11-18 14:12:34.663466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.772 [2024-11-18 14:12:34.663478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.772 00:10:42.772 00:10:42.772 CUnit - A unit testing framework for C - Version 2.1-3 00:10:42.772 http://cunit.sourceforge.net/ 00:10:42.772 00:10:42.772 00:10:42.772 Suite: accel_dif 00:10:42.772 Test: verify: DIF generated, GUARD check ...passed 00:10:42.772 Test: verify: DIF generated, APPTAG check ...passed 00:10:42.772 Test: verify: DIF generated, REFTAG check ...passed 00:10:42.772 Test: verify: DIF not generated, GUARD check ...[2024-11-18 14:12:34.776524] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:42.772 passed 00:10:42.772 Test: verify: DIF not generated, APPTAG check ...[2024-11-18 14:12:34.776653] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:42.772 [2024-11-18 14:12:34.776797] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:42.772 passed 00:10:42.772 Test: verify: DIF not generated, REFTAG check ...[2024-11-18 14:12:34.776885] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:42.772 [2024-11-18 14:12:34.776987] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:42.772 passed 00:10:42.772 Test: verify: APPTAG correct, APPTAG check ...[2024-11-18 14:12:34.777096] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:42.772 passed 00:10:42.772 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:10:42.772 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-11-18 14:12:34.777292] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:42.772 passed 00:10:42.772 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:42.772 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:42.772 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:10:42.772 Test: generate copy: DIF generated, GUARD check ...[2024-11-18 14:12:34.777615] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:42.772 passed 00:10:42.772 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:42.772 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:42.772 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:42.772 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:42.772 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:42.772 Test: generate copy: iovecs-len validate ...[2024-11-18 14:12:34.778238] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:42.772 passed 00:10:42.772 Test: generate copy: buffer alignment validate ...passed 00:10:42.772 00:10:42.772 Run Summary: Type Total Ran Passed Failed Inactive 00:10:42.772 suites 1 1 n/a 0 0 00:10:42.772 tests 20 20 20 0 0 00:10:42.772 asserts 204 204 204 0 n/a 00:10:42.772 00:10:42.772 Elapsed time = 0.001 seconds 00:10:43.031 00:10:43.031 real 0m0.700s 00:10:43.031 user 0m0.947s 00:10:43.031 sys 0m0.260s 00:10:43.031 14:12:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:43.031 14:12:35 -- common/autotest_common.sh@10 -- # set +x 00:10:43.031 ************************************ 00:10:43.031 END TEST accel_dif_functional_tests 00:10:43.031 ************************************ 00:10:43.290 00:10:43.290 real 1m8.401s 00:10:43.290 user 1m11.696s 00:10:43.290 sys 0m8.789s 00:10:43.290 14:12:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:43.290 14:12:35 -- common/autotest_common.sh@10 -- # set +x 00:10:43.290 ************************************ 00:10:43.290 END TEST accel 00:10:43.290 ************************************ 00:10:43.290 14:12:35 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:43.290 14:12:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:43.290 14:12:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.290 14:12:35 -- common/autotest_common.sh@10 -- # set +x 00:10:43.290 ************************************ 00:10:43.290 START TEST accel_rpc 00:10:43.290 ************************************ 00:10:43.290 14:12:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:43.290 * Looking for test storage... 00:10:43.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:43.290 14:12:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:43.290 14:12:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:43.290 14:12:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:43.290 14:12:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:43.291 14:12:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:43.291 14:12:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:43.291 14:12:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:43.291 14:12:35 -- scripts/common.sh@335 -- # IFS=.-: 00:10:43.291 14:12:35 -- scripts/common.sh@335 -- # read -ra ver1 00:10:43.291 14:12:35 -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.291 14:12:35 -- scripts/common.sh@336 -- # read -ra ver2 00:10:43.291 14:12:35 -- scripts/common.sh@337 -- # local 'op=<' 00:10:43.291 14:12:35 -- scripts/common.sh@339 -- # ver1_l=2 00:10:43.291 14:12:35 -- scripts/common.sh@340 -- # ver2_l=1 00:10:43.291 14:12:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:43.291 14:12:35 -- scripts/common.sh@343 -- # case "$op" in 00:10:43.291 14:12:35 -- scripts/common.sh@344 -- # : 1 00:10:43.291 14:12:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:43.291 14:12:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.291 14:12:35 -- scripts/common.sh@364 -- # decimal 1 00:10:43.291 14:12:35 -- scripts/common.sh@352 -- # local d=1 00:10:43.291 14:12:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.291 14:12:35 -- scripts/common.sh@354 -- # echo 1 00:10:43.291 14:12:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:43.291 14:12:35 -- scripts/common.sh@365 -- # decimal 2 00:10:43.291 14:12:35 -- scripts/common.sh@352 -- # local d=2 00:10:43.291 14:12:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.291 14:12:35 -- scripts/common.sh@354 -- # echo 2 00:10:43.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.291 14:12:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:43.291 14:12:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:43.291 14:12:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:43.291 14:12:35 -- scripts/common.sh@367 -- # return 0 00:10:43.291 14:12:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.291 14:12:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.291 --rc genhtml_branch_coverage=1 00:10:43.291 --rc genhtml_function_coverage=1 00:10:43.291 --rc genhtml_legend=1 00:10:43.291 --rc geninfo_all_blocks=1 00:10:43.291 --rc geninfo_unexecuted_blocks=1 00:10:43.291 00:10:43.291 ' 00:10:43.291 14:12:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.291 --rc genhtml_branch_coverage=1 00:10:43.291 --rc genhtml_function_coverage=1 00:10:43.291 --rc genhtml_legend=1 00:10:43.291 --rc geninfo_all_blocks=1 00:10:43.291 --rc geninfo_unexecuted_blocks=1 00:10:43.291 00:10:43.291 ' 00:10:43.291 14:12:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.291 --rc genhtml_branch_coverage=1 00:10:43.291 --rc genhtml_function_coverage=1 00:10:43.291 --rc genhtml_legend=1 00:10:43.291 --rc geninfo_all_blocks=1 00:10:43.291 --rc geninfo_unexecuted_blocks=1 00:10:43.291 00:10:43.291 ' 00:10:43.291 14:12:35 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:43.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.291 --rc genhtml_branch_coverage=1 00:10:43.291 --rc genhtml_function_coverage=1 00:10:43.291 --rc genhtml_legend=1 00:10:43.291 --rc geninfo_all_blocks=1 00:10:43.291 --rc geninfo_unexecuted_blocks=1 00:10:43.291 00:10:43.291 ' 00:10:43.291 14:12:35 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:43.291 14:12:35 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=118942 00:10:43.291 14:12:35 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:43.291 14:12:35 -- accel/accel_rpc.sh@15 -- # waitforlisten 118942 00:10:43.291 14:12:35 -- common/autotest_common.sh@829 -- # '[' -z 118942 ']' 00:10:43.291 14:12:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.291 14:12:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.291 14:12:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
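The accel_rpc suite checks that opcode assignments are made before framework initialization: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned (first to a dummy module "incorrect", then to software), init is completed, and the assignment is read back. Condensed from the RPCs visible in this log:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    tgt=$!
    # the harness's waitforlisten polls /var/tmp/spdk.sock at this point
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect   # superseded below
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # -> software
    kill "$tgt"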
00:10:43.291 14:12:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.291 14:12:35 -- common/autotest_common.sh@10 -- # set +x 00:10:43.550 [2024-11-18 14:12:35.373751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:43.550 [2024-11-18 14:12:35.373945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118942 ] 00:10:43.550 [2024-11-18 14:12:35.512201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.550 [2024-11-18 14:12:35.579053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:43.550 [2024-11-18 14:12:35.579314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.486 14:12:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.486 14:12:36 -- common/autotest_common.sh@862 -- # return 0 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:44.486 14:12:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:44.486 14:12:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.486 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.486 ************************************ 00:10:44.486 START TEST accel_assign_opcode 00:10:44.486 ************************************ 00:10:44.486 14:12:36 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:44.486 14:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.486 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.486 [2024-11-18 14:12:36.348059] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:44.486 14:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:44.486 14:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.486 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.486 [2024-11-18 14:12:36.356064] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:44.486 14:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.486 14:12:36 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:44.486 14:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.486 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 14:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.745 14:12:36 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:44.745 14:12:36 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:44.745 14:12:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.745 14:12:36 -- accel/accel_rpc.sh@42 -- # grep software 00:10:44.745 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 14:12:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.745 software 00:10:44.745 00:10:44.745 
real 0m0.356s 00:10:44.745 user 0m0.056s 00:10:44.745 sys 0m0.007s 00:10:44.745 14:12:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.745 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.745 ************************************ 00:10:44.745 END TEST accel_assign_opcode 00:10:44.745 ************************************ 00:10:44.745 14:12:36 -- accel/accel_rpc.sh@55 -- # killprocess 118942 00:10:44.745 14:12:36 -- common/autotest_common.sh@936 -- # '[' -z 118942 ']' 00:10:44.745 14:12:36 -- common/autotest_common.sh@940 -- # kill -0 118942 00:10:44.745 14:12:36 -- common/autotest_common.sh@941 -- # uname 00:10:44.745 14:12:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:44.745 14:12:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118942 00:10:44.745 14:12:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:44.745 14:12:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:44.745 14:12:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118942' 00:10:44.745 killing process with pid 118942 00:10:44.745 14:12:36 -- common/autotest_common.sh@955 -- # kill 118942 00:10:44.745 14:12:36 -- common/autotest_common.sh@960 -- # wait 118942 00:10:45.312 00:10:45.312 real 0m2.131s 00:10:45.312 user 0m2.169s 00:10:45.312 sys 0m0.495s 00:10:45.312 14:12:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:45.312 ************************************ 00:10:45.312 14:12:37 -- common/autotest_common.sh@10 -- # set +x 00:10:45.312 END TEST accel_rpc 00:10:45.312 ************************************ 00:10:45.312 14:12:37 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:45.312 14:12:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:45.312 14:12:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.312 14:12:37 -- common/autotest_common.sh@10 -- # set +x 00:10:45.312 ************************************ 00:10:45.312 START TEST app_cmdline 00:10:45.312 ************************************ 00:10:45.312 14:12:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:45.571 * Looking for test storage... 
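Teardown here follows the harness's killprocess helper: confirm the pid is still alive, check the process name (reactor_0 for an SPDK target, and never a sudo wrapper), then kill and reap it. A simplified sketch of that pattern as it appears in these logs:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # 'reactor_0' for spdk_tgt
        [ "$name" != sudo ] || return 1             # refuse to kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }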
00:10:45.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:45.571 14:12:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:45.571 14:12:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:45.571 14:12:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:45.571 14:12:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:45.571 14:12:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:45.571 14:12:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:45.571 14:12:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:45.571 14:12:37 -- scripts/common.sh@335 -- # IFS=.-: 00:10:45.571 14:12:37 -- scripts/common.sh@335 -- # read -ra ver1 00:10:45.571 14:12:37 -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.571 14:12:37 -- scripts/common.sh@336 -- # read -ra ver2 00:10:45.571 14:12:37 -- scripts/common.sh@337 -- # local 'op=<' 00:10:45.571 14:12:37 -- scripts/common.sh@339 -- # ver1_l=2 00:10:45.571 14:12:37 -- scripts/common.sh@340 -- # ver2_l=1 00:10:45.571 14:12:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:45.571 14:12:37 -- scripts/common.sh@343 -- # case "$op" in 00:10:45.571 14:12:37 -- scripts/common.sh@344 -- # : 1 00:10:45.571 14:12:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:45.571 14:12:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:45.571 14:12:37 -- scripts/common.sh@364 -- # decimal 1 00:10:45.571 14:12:37 -- scripts/common.sh@352 -- # local d=1 00:10:45.571 14:12:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.571 14:12:37 -- scripts/common.sh@354 -- # echo 1 00:10:45.571 14:12:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:45.571 14:12:37 -- scripts/common.sh@365 -- # decimal 2 00:10:45.571 14:12:37 -- scripts/common.sh@352 -- # local d=2 00:10:45.571 14:12:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.571 14:12:37 -- scripts/common.sh@354 -- # echo 2 00:10:45.571 14:12:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:45.572 14:12:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:45.572 14:12:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:45.572 14:12:37 -- scripts/common.sh@367 -- # return 0 00:10:45.572 14:12:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.572 14:12:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.572 --rc genhtml_branch_coverage=1 00:10:45.572 --rc genhtml_function_coverage=1 00:10:45.572 --rc genhtml_legend=1 00:10:45.572 --rc geninfo_all_blocks=1 00:10:45.572 --rc geninfo_unexecuted_blocks=1 00:10:45.572 00:10:45.572 ' 00:10:45.572 14:12:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.572 --rc genhtml_branch_coverage=1 00:10:45.572 --rc genhtml_function_coverage=1 00:10:45.572 --rc genhtml_legend=1 00:10:45.572 --rc geninfo_all_blocks=1 00:10:45.572 --rc geninfo_unexecuted_blocks=1 00:10:45.572 00:10:45.572 ' 00:10:45.572 14:12:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.572 --rc genhtml_branch_coverage=1 00:10:45.572 --rc genhtml_function_coverage=1 00:10:45.572 --rc genhtml_legend=1 00:10:45.572 --rc geninfo_all_blocks=1 00:10:45.572 --rc geninfo_unexecuted_blocks=1 00:10:45.572 00:10:45.572 ' 00:10:45.572 14:12:37 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:45.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.572 --rc genhtml_branch_coverage=1 00:10:45.572 --rc genhtml_function_coverage=1 00:10:45.572 --rc genhtml_legend=1 00:10:45.572 --rc geninfo_all_blocks=1 00:10:45.572 --rc geninfo_unexecuted_blocks=1 00:10:45.572 00:10:45.572 ' 00:10:45.572 14:12:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:45.572 14:12:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=119058 00:10:45.572 14:12:37 -- app/cmdline.sh@18 -- # waitforlisten 119058 00:10:45.572 14:12:37 -- common/autotest_common.sh@829 -- # '[' -z 119058 ']' 00:10:45.572 14:12:37 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:45.572 14:12:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.572 14:12:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.572 14:12:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.572 14:12:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.572 14:12:37 -- common/autotest_common.sh@10 -- # set +x 00:10:45.572 [2024-11-18 14:12:37.573972] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:45.572 [2024-11-18 14:12:37.574180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119058 ] 00:10:45.831 [2024-11-18 14:12:37.711211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.831 [2024-11-18 14:12:37.775354] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.831 [2024-11-18 14:12:37.775644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.767 14:12:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.767 14:12:38 -- common/autotest_common.sh@862 -- # return 0 00:10:46.767 14:12:38 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:46.767 { 00:10:46.767 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:10:46.767 "fields": { 00:10:46.767 "major": 24, 00:10:46.767 "minor": 1, 00:10:46.767 "patch": 1, 00:10:46.767 "suffix": "-pre", 00:10:46.767 "commit": "c13c99a5e" 00:10:46.767 } 00:10:46.767 } 00:10:46.767 14:12:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:46.767 14:12:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:46.767 14:12:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:46.767 14:12:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:46.767 14:12:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:46.767 14:12:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:46.767 14:12:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.767 14:12:38 -- common/autotest_common.sh@10 -- # set +x 00:10:46.767 14:12:38 -- app/cmdline.sh@26 -- # sort 00:10:46.767 14:12:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.026 14:12:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:47.026 14:12:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:47.026 14:12:38 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:47.026 14:12:38 -- common/autotest_common.sh@650 -- # local es=0 00:10:47.026 14:12:38 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:47.026 14:12:38 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.026 14:12:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.026 14:12:38 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.026 14:12:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.026 14:12:38 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.026 14:12:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.026 14:12:38 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.026 14:12:38 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:47.026 14:12:38 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:47.026 request: 00:10:47.026 { 00:10:47.026 "method": "env_dpdk_get_mem_stats", 00:10:47.026 "req_id": 1 00:10:47.026 } 00:10:47.026 Got JSON-RPC error response 00:10:47.026 response: 00:10:47.026 { 00:10:47.026 "code": -32601, 00:10:47.026 "message": "Method not found" 00:10:47.026 } 00:10:47.026 14:12:39 -- common/autotest_common.sh@653 -- # es=1 00:10:47.026 14:12:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:47.026 14:12:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:47.026 14:12:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:47.026 14:12:39 -- app/cmdline.sh@1 -- # killprocess 119058 00:10:47.026 14:12:39 -- common/autotest_common.sh@936 -- # '[' -z 119058 ']' 00:10:47.026 14:12:39 -- common/autotest_common.sh@940 -- # kill -0 119058 00:10:47.026 14:12:39 -- common/autotest_common.sh@941 -- # uname 00:10:47.295 14:12:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.295 14:12:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119058 00:10:47.295 14:12:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:47.295 14:12:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:47.295 killing process with pid 119058 00:10:47.295 14:12:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119058' 00:10:47.295 14:12:39 -- common/autotest_common.sh@955 -- # kill 119058 00:10:47.295 14:12:39 -- common/autotest_common.sh@960 -- # wait 119058 00:10:47.863 00:10:47.863 real 0m2.315s 00:10:47.863 user 0m2.704s 00:10:47.863 sys 0m0.595s 00:10:47.863 14:12:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.863 14:12:39 -- common/autotest_common.sh@10 -- # set +x 00:10:47.863 ************************************ 00:10:47.863 END TEST app_cmdline 00:10:47.863 ************************************ 00:10:47.863 14:12:39 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:47.863 14:12:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.863 14:12:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.863 14:12:39 -- common/autotest_common.sh@10 -- # set +x 
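Taken together, the cmdline.sh trace above amounts to the allowlist check sketched below. This is a condensed sketch reusing the exact binaries and RPC names from the log; the real test drives startup and teardown through the waitforlisten/killprocess helpers in autotest_common.sh, which the sketch replaces with a plain sleep and kill.

  SPDK=/home/vagrant/spdk_repo/spdk

  # Start the target with only two RPCs allowed, as in the trace above.
  "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
  spdk_tgt_pid=$!
  sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

  # rpc_get_methods must report exactly the two allowed methods, sorted.
  methods=$("$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort | xargs)
  [[ "$methods" == "rpc_get_methods spdk_get_version" ]]

  # Any other method must fail with JSON-RPC -32601 (Method not found),
  # which is the error response captured in the log.
  if "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats 2>/dev/null; then
      echo "disallowed RPC unexpectedly succeeded" >&2
      exit 1
  fi

  kill "$spdk_tgt_pid"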
00:10:47.863 ************************************ 00:10:47.863 START TEST version 00:10:47.863 ************************************ 00:10:47.863 14:12:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:47.863 * Looking for test storage... 00:10:47.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:47.863 14:12:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:47.863 14:12:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:47.863 14:12:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:47.863 14:12:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:47.863 14:12:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:47.863 14:12:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:47.863 14:12:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:47.863 14:12:39 -- scripts/common.sh@335 -- # IFS=.-: 00:10:47.863 14:12:39 -- scripts/common.sh@335 -- # read -ra ver1 00:10:47.863 14:12:39 -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.863 14:12:39 -- scripts/common.sh@336 -- # read -ra ver2 00:10:47.863 14:12:39 -- scripts/common.sh@337 -- # local 'op=<' 00:10:47.863 14:12:39 -- scripts/common.sh@339 -- # ver1_l=2 00:10:47.863 14:12:39 -- scripts/common.sh@340 -- # ver2_l=1 00:10:47.863 14:12:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:47.863 14:12:39 -- scripts/common.sh@343 -- # case "$op" in 00:10:47.863 14:12:39 -- scripts/common.sh@344 -- # : 1 00:10:47.863 14:12:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:47.863 14:12:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.863 14:12:39 -- scripts/common.sh@364 -- # decimal 1 00:10:47.863 14:12:39 -- scripts/common.sh@352 -- # local d=1 00:10:47.863 14:12:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.863 14:12:39 -- scripts/common.sh@354 -- # echo 1 00:10:47.863 14:12:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:47.863 14:12:39 -- scripts/common.sh@365 -- # decimal 2 00:10:47.863 14:12:39 -- scripts/common.sh@352 -- # local d=2 00:10:47.863 14:12:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.863 14:12:39 -- scripts/common.sh@354 -- # echo 2 00:10:47.863 14:12:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:47.863 14:12:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:47.863 14:12:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:47.863 14:12:39 -- scripts/common.sh@367 -- # return 0 00:10:47.863 14:12:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.863 14:12:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.863 --rc genhtml_branch_coverage=1 00:10:47.863 --rc genhtml_function_coverage=1 00:10:47.863 --rc genhtml_legend=1 00:10:47.863 --rc geninfo_all_blocks=1 00:10:47.863 --rc geninfo_unexecuted_blocks=1 00:10:47.863 00:10:47.863 ' 00:10:47.863 14:12:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.863 --rc genhtml_branch_coverage=1 00:10:47.863 --rc genhtml_function_coverage=1 00:10:47.863 --rc genhtml_legend=1 00:10:47.863 --rc geninfo_all_blocks=1 00:10:47.863 --rc geninfo_unexecuted_blocks=1 00:10:47.863 00:10:47.863 ' 00:10:47.863 14:12:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:47.863 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:47.863 --rc genhtml_branch_coverage=1 00:10:47.863 --rc genhtml_function_coverage=1 00:10:47.863 --rc genhtml_legend=1 00:10:47.863 --rc geninfo_all_blocks=1 00:10:47.863 --rc geninfo_unexecuted_blocks=1 00:10:47.863 00:10:47.863 ' 00:10:47.863 14:12:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:47.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.863 --rc genhtml_branch_coverage=1 00:10:47.863 --rc genhtml_function_coverage=1 00:10:47.863 --rc genhtml_legend=1 00:10:47.863 --rc geninfo_all_blocks=1 00:10:47.863 --rc geninfo_unexecuted_blocks=1 00:10:47.863 00:10:47.863 ' 00:10:47.863 14:12:39 -- app/version.sh@17 -- # get_header_version major 00:10:47.863 14:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:47.863 14:12:39 -- app/version.sh@14 -- # cut -f2 00:10:47.863 14:12:39 -- app/version.sh@14 -- # tr -d '"' 00:10:47.863 14:12:39 -- app/version.sh@17 -- # major=24 00:10:47.863 14:12:39 -- app/version.sh@18 -- # get_header_version minor 00:10:47.863 14:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:47.863 14:12:39 -- app/version.sh@14 -- # cut -f2 00:10:47.863 14:12:39 -- app/version.sh@14 -- # tr -d '"' 00:10:47.863 14:12:39 -- app/version.sh@18 -- # minor=1 00:10:47.863 14:12:39 -- app/version.sh@19 -- # get_header_version patch 00:10:47.863 14:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:47.863 14:12:39 -- app/version.sh@14 -- # cut -f2 00:10:47.863 14:12:39 -- app/version.sh@14 -- # tr -d '"' 00:10:47.863 14:12:39 -- app/version.sh@19 -- # patch=1 00:10:47.863 14:12:39 -- app/version.sh@20 -- # get_header_version suffix 00:10:47.864 14:12:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:47.864 14:12:39 -- app/version.sh@14 -- # cut -f2 00:10:47.864 14:12:39 -- app/version.sh@14 -- # tr -d '"' 00:10:47.864 14:12:39 -- app/version.sh@20 -- # suffix=-pre 00:10:47.864 14:12:39 -- app/version.sh@22 -- # version=24.1 00:10:47.864 14:12:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:47.864 14:12:39 -- app/version.sh@25 -- # version=24.1.1 00:10:47.864 14:12:39 -- app/version.sh@28 -- # version=24.1.1rc0 00:10:47.864 14:12:39 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:47.864 14:12:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:48.123 14:12:39 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:10:48.123 14:12:39 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:10:48.123 00:10:48.123 real 0m0.224s 00:10:48.123 user 0m0.142s 00:10:48.123 sys 0m0.124s 00:10:48.123 14:12:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:48.123 14:12:39 -- common/autotest_common.sh@10 -- # set +x 00:10:48.123 ************************************ 00:10:48.123 END TEST version 00:10:48.123 ************************************ 00:10:48.123 14:12:39 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:10:48.123 14:12:39 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
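The version.sh trace condenses to the sketch below: each component is scraped out of include/spdk/version.h with the same grep/cut/tr pipeline shown above (assuming the tab-separated #define lines of the shipped header), then compared against what the bundled Python package reports.

  SPDK=/home/vagrant/spdk_repo/spdk

  get_header_version() {   # e.g. get_header_version MAJOR -> 24
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          "$SPDK/include/spdk/version.h" | cut -f2 | tr -d '"'
  }

  major=$(get_header_version MAJOR)     # 24
  minor=$(get_header_version MINOR)     # 1
  patch=$(get_header_version PATCH)     # 1
  suffix=$(get_header_version SUFFIX)   # -pre

  version="$major.$minor"
  (( patch != 0 )) && version="$version.$patch"
  [[ "$suffix" == "-pre" ]] && version="${version}rc0"   # a -pre tree reports as ...rc0

  py_version=$(PYTHONPATH="$SPDK/python" \
      python3 -c 'import spdk; print(spdk.__version__)')
  [[ "$py_version" == "$version" ]]     # 24.1.1rc0 == 24.1.1rc0 in the log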
00:10:48.123 14:12:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.123 14:12:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.123 14:12:39 -- common/autotest_common.sh@10 -- # set +x 00:10:48.123 ************************************ 00:10:48.123 START TEST blockdev_general 00:10:48.123 ************************************ 00:10:48.123 14:12:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:10:48.123 * Looking for test storage... 00:10:48.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:48.123 14:12:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:48.123 14:12:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:48.123 14:12:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:48.123 14:12:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:48.123 14:12:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:48.123 14:12:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:48.123 14:12:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:48.123 14:12:40 -- scripts/common.sh@335 -- # IFS=.-: 00:10:48.123 14:12:40 -- scripts/common.sh@335 -- # read -ra ver1 00:10:48.123 14:12:40 -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.123 14:12:40 -- scripts/common.sh@336 -- # read -ra ver2 00:10:48.123 14:12:40 -- scripts/common.sh@337 -- # local 'op=<' 00:10:48.123 14:12:40 -- scripts/common.sh@339 -- # ver1_l=2 00:10:48.123 14:12:40 -- scripts/common.sh@340 -- # ver2_l=1 00:10:48.123 14:12:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:48.123 14:12:40 -- scripts/common.sh@343 -- # case "$op" in 00:10:48.123 14:12:40 -- scripts/common.sh@344 -- # : 1 00:10:48.123 14:12:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:48.123 14:12:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.123 14:12:40 -- scripts/common.sh@364 -- # decimal 1 00:10:48.123 14:12:40 -- scripts/common.sh@352 -- # local d=1 00:10:48.123 14:12:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.123 14:12:40 -- scripts/common.sh@354 -- # echo 1 00:10:48.123 14:12:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:48.123 14:12:40 -- scripts/common.sh@365 -- # decimal 2 00:10:48.123 14:12:40 -- scripts/common.sh@352 -- # local d=2 00:10:48.123 14:12:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.123 14:12:40 -- scripts/common.sh@354 -- # echo 2 00:10:48.123 14:12:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:48.123 14:12:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:48.123 14:12:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:48.123 14:12:40 -- scripts/common.sh@367 -- # return 0 00:10:48.123 14:12:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.123 14:12:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:48.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.123 --rc genhtml_branch_coverage=1 00:10:48.123 --rc genhtml_function_coverage=1 00:10:48.123 --rc genhtml_legend=1 00:10:48.123 --rc geninfo_all_blocks=1 00:10:48.123 --rc geninfo_unexecuted_blocks=1 00:10:48.123 00:10:48.123 ' 00:10:48.123 14:12:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:48.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.123 --rc genhtml_branch_coverage=1 00:10:48.123 --rc genhtml_function_coverage=1 00:10:48.123 --rc genhtml_legend=1 00:10:48.123 --rc geninfo_all_blocks=1 00:10:48.123 --rc geninfo_unexecuted_blocks=1 00:10:48.123 00:10:48.123 ' 00:10:48.123 14:12:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:48.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.123 --rc genhtml_branch_coverage=1 00:10:48.123 --rc genhtml_function_coverage=1 00:10:48.123 --rc genhtml_legend=1 00:10:48.123 --rc geninfo_all_blocks=1 00:10:48.123 --rc geninfo_unexecuted_blocks=1 00:10:48.123 00:10:48.123 ' 00:10:48.123 14:12:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:48.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.123 --rc genhtml_branch_coverage=1 00:10:48.123 --rc genhtml_function_coverage=1 00:10:48.123 --rc genhtml_legend=1 00:10:48.123 --rc geninfo_all_blocks=1 00:10:48.123 --rc geninfo_unexecuted_blocks=1 00:10:48.123 00:10:48.123 ' 00:10:48.123 14:12:40 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:48.123 14:12:40 -- bdev/nbd_common.sh@6 -- # set -e 00:10:48.123 14:12:40 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:48.123 14:12:40 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:48.123 14:12:40 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:48.123 14:12:40 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:48.123 14:12:40 -- bdev/blockdev.sh@18 -- # : 00:10:48.123 14:12:40 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:48.123 14:12:40 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:48.123 14:12:40 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:48.123 14:12:40 -- bdev/blockdev.sh@672 -- # uname -s 00:10:48.123 14:12:40 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:10:48.123 14:12:40 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:10:48.123 14:12:40 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:10:48.123 14:12:40 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:48.123 14:12:40 -- bdev/blockdev.sh@682 -- # dek= 00:10:48.123 14:12:40 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:48.123 14:12:40 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:48.123 14:12:40 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:48.123 14:12:40 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:10:48.123 14:12:40 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:10:48.123 14:12:40 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:48.123 14:12:40 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=119225 00:10:48.123 14:12:40 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:48.123 14:12:40 -- bdev/blockdev.sh@47 -- # waitforlisten 119225 00:10:48.123 14:12:40 -- common/autotest_common.sh@829 -- # '[' -z 119225 ']' 00:10:48.123 14:12:40 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:10:48.123 14:12:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.123 14:12:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.123 14:12:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.123 14:12:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.123 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:10:48.382 [2024-11-18 14:12:40.241862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:48.382 [2024-11-18 14:12:40.242578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119225 ] 00:10:48.382 [2024-11-18 14:12:40.385336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.382 [2024-11-18 14:12:40.452658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:48.382 [2024-11-18 14:12:40.452921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.319 14:12:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.319 14:12:41 -- common/autotest_common.sh@862 -- # return 0 00:10:49.319 14:12:41 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:49.319 14:12:41 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:10:49.319 14:12:41 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:10:49.319 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.319 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.578 [2024-11-18 14:12:41.530247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:49.578 [2024-11-18 14:12:41.530363] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:49.578 00:10:49.578 [2024-11-18 14:12:41.538178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:49.578 [2024-11-18 14:12:41.538241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:49.578 00:10:49.578 Malloc0 00:10:49.578 Malloc1 00:10:49.578 Malloc2 00:10:49.578 Malloc3 00:10:49.578 Malloc4 00:10:49.837 
Malloc5 00:10:49.837 Malloc6 00:10:49.837 Malloc7 00:10:49.837 Malloc8 00:10:49.837 Malloc9 00:10:49.837 [2024-11-18 14:12:41.742224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:49.837 [2024-11-18 14:12:41.742351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.837 [2024-11-18 14:12:41.742408] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:49.837 [2024-11-18 14:12:41.742446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.837 [2024-11-18 14:12:41.745011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.837 [2024-11-18 14:12:41.745094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:49.837 TestPT 00:10:49.837 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.837 14:12:41 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:10:49.837 5000+0 records in 00:10:49.837 5000+0 records out 00:10:49.837 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0279155 s, 367 MB/s 00:10:49.837 14:12:41 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:10:49.837 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.837 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.837 AIO0 00:10:49.837 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.837 14:12:41 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:49.837 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.837 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.837 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.837 14:12:41 -- bdev/blockdev.sh@738 -- # cat 00:10:49.837 14:12:41 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:49.837 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.837 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.837 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.837 14:12:41 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:49.837 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.837 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.837 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.097 14:12:41 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:50.097 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.097 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:50.097 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.097 14:12:41 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:50.097 14:12:41 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:50.097 14:12:41 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:50.097 14:12:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.097 14:12:41 -- common/autotest_common.sh@10 -- # set +x 00:10:50.097 14:12:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.097 14:12:42 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:50.097 14:12:42 -- bdev/blockdev.sh@747 -- # jq -r .name 00:10:50.098 14:12:42 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d0881076-e730-4ab5-86ef-f6736d72fafa"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d0881076-e730-4ab5-86ef-f6736d72fafa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4a949022-0a75-58bb-8e43-7acb6ddf8d92"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4a949022-0a75-58bb-8e43-7acb6ddf8d92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "468eb969-8265-5dbf-8b9d-2337f0c9e7ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "468eb969-8265-5dbf-8b9d-2337f0c9e7ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "fa75de9e-ea58-5364-92c5-27f680682831"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fa75de9e-ea58-5364-92c5-27f680682831",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b154af71-a5f7-5f88-9e23-9b05e28fcda8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b154af71-a5f7-5f88-9e23-9b05e28fcda8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8589296d-933a-56cb-902c-cb4e09e7ee35"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8589296d-933a-56cb-902c-cb4e09e7ee35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1722a258-8146-5b5e-b201-240c8e2389e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1722a258-8146-5b5e-b201-240c8e2389e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7438e5f6-e854-59f1-aa41-7089065714a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7438e5f6-e854-59f1-aa41-7089065714a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "06b8c853-5d7e-584a-9469-ad74781f7370"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06b8c853-5d7e-584a-9469-ad74781f7370",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d909c51f-87ed-5d36-bad5-d7fb007de6ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d909c51f-87ed-5d36-bad5-d7fb007de6ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a54465c1-dbfa-52b1-b5d9-37992ed826eb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a54465c1-dbfa-52b1-b5d9-37992ed826eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "013e915e-2070-5329-b1e3-369a1b348570"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "013e915e-2070-5329-b1e3-369a1b348570",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "910a0f46-0fef-4fad-a292-ba8234261b8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "910a0f46-0fef-4fad-a292-ba8234261b8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "910a0f46-0fef-4fad-a292-ba8234261b8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ec7e492c-9e34-43dc-aea3-305d2c25faa6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "29ba7dfd-5684-4327-a5f5-a332b0eeb681",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "24114fa6-66f3-4845-8ee8-8f9422d31325"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "24114fa6-66f3-4845-8ee8-8f9422d31325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "24114fa6-66f3-4845-8ee8-8f9422d31325",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "20e014b1-3f5d-4e83-9dbc-d6e69d1ac4ee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "596a0ad9-4551-47de-95cb-2b6fc97d0d40",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "56ab7422-dcd0-414c-b5ff-f2adabe783f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56ab7422-dcd0-414c-b5ff-f2adabe783f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "56ab7422-dcd0-414c-b5ff-f2adabe783f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "8370a061-2a43-4ff0-983d-502715fbb232",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "caeb10f2-192a-4a4b-9379-80d7b332f90f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a9194642-68cf-4257-bac1-bd188dd963b4"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a9194642-68cf-4257-bac1-bd188dd963b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:10:50.098 14:12:42 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:50.098 14:12:42 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:10:50.098 14:12:42 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:50.098 14:12:42 -- bdev/blockdev.sh@752 -- # killprocess 119225 00:10:50.098 14:12:42 -- common/autotest_common.sh@936 -- # '[' -z 119225 ']' 00:10:50.098 14:12:42 -- common/autotest_common.sh@940 -- # kill -0 119225 00:10:50.098 14:12:42 -- common/autotest_common.sh@941 -- # uname 00:10:50.098 14:12:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.098 14:12:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119225 00:10:50.098 14:12:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:50.098 14:12:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:50.098 killing process with pid 119225 00:10:50.098 14:12:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119225' 00:10:50.098 14:12:42 -- common/autotest_common.sh@955 -- # kill 119225 00:10:50.098 14:12:42 -- common/autotest_common.sh@960 -- # wait 119225 00:10:51.034 14:12:42 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:51.034 14:12:42 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:51.034 14:12:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:51.034 14:12:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.034 14:12:42 -- common/autotest_common.sh@10 -- # set +x 00:10:51.034 ************************************ 00:10:51.034 START TEST bdev_hello_world 00:10:51.034 ************************************ 00:10:51.034 14:12:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:51.034 [2024-11-18 14:12:42.862456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:51.034 [2024-11-18 14:12:42.862682] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119286 ] 00:10:51.034 [2024-11-18 14:12:43.010058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.034 [2024-11-18 14:12:43.073978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.310 [2024-11-18 14:12:43.242090] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:51.310 [2024-11-18 14:12:43.242236] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:51.310 [2024-11-18 14:12:43.249985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:51.310 [2024-11-18 14:12:43.250066] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:51.310 [2024-11-18 14:12:43.258029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:51.310 [2024-11-18 14:12:43.258107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:51.310 [2024-11-18 14:12:43.258151] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:51.310 [2024-11-18 14:12:43.366281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:51.310 [2024-11-18 14:12:43.366417] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.310 [2024-11-18 14:12:43.366492] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:51.310 [2024-11-18 14:12:43.366535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.310 [2024-11-18 14:12:43.369079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.310 [2024-11-18 14:12:43.369140] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:51.610 [2024-11-18 14:12:43.553131] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:51.610 [2024-11-18 14:12:43.553292] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:10:51.610 [2024-11-18 14:12:43.553494] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:51.610 [2024-11-18 14:12:43.553636] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:51.610 [2024-11-18 14:12:43.553859] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:51.610 [2024-11-18 14:12:43.553959] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:51.610 [2024-11-18 14:12:43.554084] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
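For reference, the hello-world pass above can be reproduced directly; the NOTICE lines are the example tracing its own callback chain (start app, open Malloc0, get an I/O channel, write the string, then read it back and compare). A minimal re-run, using the binary and JSON config named in the log:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/hello_bdev" \
      --json "$SPDK/test/bdev/bdev.json" -b Malloc0
  # Success is the read-back line seen above:
  #   read_complete: *NOTICE*: Read string from bdev : Hello World!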
00:10:51.610 00:10:51.610 [2024-11-18 14:12:43.554184] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:52.185 00:10:52.185 real 0m1.193s 00:10:52.185 user 0m0.695s 00:10:52.185 sys 0m0.352s 00:10:52.185 14:12:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:52.185 14:12:43 -- common/autotest_common.sh@10 -- # set +x 00:10:52.185 ************************************ 00:10:52.185 END TEST bdev_hello_world 00:10:52.185 ************************************ 00:10:52.185 14:12:44 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:52.185 14:12:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:52.185 14:12:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.185 14:12:44 -- common/autotest_common.sh@10 -- # set +x 00:10:52.185 ************************************ 00:10:52.185 START TEST bdev_bounds 00:10:52.185 ************************************ 00:10:52.185 14:12:44 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:10:52.185 14:12:44 -- bdev/blockdev.sh@288 -- # bdevio_pid=119329 00:10:52.186 14:12:44 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.186 14:12:44 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 119329' 00:10:52.186 Process bdevio pid: 119329 00:10:52.186 14:12:44 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:52.186 14:12:44 -- bdev/blockdev.sh@291 -- # waitforlisten 119329 00:10:52.186 14:12:44 -- common/autotest_common.sh@829 -- # '[' -z 119329 ']' 00:10:52.186 14:12:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.186 14:12:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.186 14:12:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.186 14:12:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.186 14:12:44 -- common/autotest_common.sh@10 -- # set +x 00:10:52.186 [2024-11-18 14:12:44.107766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:10:52.186 [2024-11-18 14:12:44.108458] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119329 ] 00:10:52.444 [2024-11-18 14:12:44.263715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.444 [2024-11-18 14:12:44.329448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.444 [2024-11-18 14:12:44.329570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.444 [2024-11-18 14:12:44.329588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.444 [2024-11-18 14:12:44.499884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:52.444 [2024-11-18 14:12:44.500006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:52.444 [2024-11-18 14:12:44.507757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:52.444 [2024-11-18 14:12:44.507849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:52.444 [2024-11-18 14:12:44.515856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:52.444 [2024-11-18 14:12:44.515983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:52.444 [2024-11-18 14:12:44.516074] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:52.703 [2024-11-18 14:12:44.623620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:52.703 [2024-11-18 14:12:44.623766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.703 [2024-11-18 14:12:44.623892] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:52.703 [2024-11-18 14:12:44.623942] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.703 [2024-11-18 14:12:44.626771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.703 [2024-11-18 14:12:44.626854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:53.271 14:12:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.271 14:12:45 -- common/autotest_common.sh@862 -- # return 0 00:10:53.271 14:12:45 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:53.271 I/O targets: 00:10:53.271 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:10:53.271 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:10:53.271 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:10:53.271 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:10:53.271 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:10:53.271 raid0: 131072 blocks of 512 bytes (64 MiB) 00:10:53.271 concat0: 131072 blocks of 512 bytes (64 MiB) 00:10:53.271 raid1: 65536 blocks of 512 bytes (32 MiB) 00:10:53.271 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
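The AIO0 entry in the target list follows directly from the setup earlier in the log: the backing file was written with dd using a 2048-byte block size and 5000 blocks, and registered with the same block size, so bdevio sees 10240000 / 2048 = 5000 blocks. A sketch of that pairing, assuming a target is already listening on the default socket (rpc_cmd in the test is a thin wrapper around scripts/rpc.py):

  SPDK=/home/vagrant/spdk_repo/spdk
  dd if=/dev/zero of="$SPDK/test/bdev/aiofile" bs=2048 count=5000   # 10240000 bytes
  "$SPDK/scripts/rpc.py" bdev_aio_create "$SPDK/test/bdev/aiofile" AIO0 2048
  # 5000 * 2048 = 10,240,000 bytes; dd reports this as 9.8 MiB, while the
  # I/O targets list above shows the same capacity rounded to 10 MiB.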
00:10:53.271 00:10:53.271 00:10:53.271 CUnit - A unit testing framework for C - Version 2.1-3 00:10:53.271 http://cunit.sourceforge.net/ 00:10:53.271 00:10:53.271 00:10:53.271 Suite: bdevio tests on: AIO0 00:10:53.271 Test: blockdev write read block ...passed 00:10:53.271 Test: blockdev write zeroes read block ...passed 00:10:53.271 Test: blockdev write zeroes read no split ...passed 00:10:53.271 Test: blockdev write zeroes read split ...passed 00:10:53.271 Test: blockdev write zeroes read split partial ...passed 00:10:53.271 Test: blockdev reset ...passed 00:10:53.271 Test: blockdev write read 8 blocks ...passed 00:10:53.271 Test: blockdev write read size > 128k ...passed 00:10:53.271 Test: blockdev write read invalid size ...passed 00:10:53.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.271 Test: blockdev write read max offset ...passed 00:10:53.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.271 Test: blockdev writev readv 8 blocks ...passed 00:10:53.271 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.271 Test: blockdev writev readv block ...passed 00:10:53.271 Test: blockdev writev readv size > 128k ...passed 00:10:53.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.271 Test: blockdev comparev and writev ...passed 00:10:53.271 Test: blockdev nvme passthru rw ...passed 00:10:53.271 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.271 Test: blockdev nvme admin passthru ...passed 00:10:53.271 Test: blockdev copy ...passed 00:10:53.271 Suite: bdevio tests on: raid1 00:10:53.271 Test: blockdev write read block ...passed 00:10:53.271 Test: blockdev write zeroes read block ...passed 00:10:53.271 Test: blockdev write zeroes read no split ...passed 00:10:53.271 Test: blockdev write zeroes read split ...passed 00:10:53.271 Test: blockdev write zeroes read split partial ...passed 00:10:53.271 Test: blockdev reset ...passed 00:10:53.271 Test: blockdev write read 8 blocks ...passed 00:10:53.271 Test: blockdev write read size > 128k ...passed 00:10:53.271 Test: blockdev write read invalid size ...passed 00:10:53.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.271 Test: blockdev write read max offset ...passed 00:10:53.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.271 Test: blockdev writev readv 8 blocks ...passed 00:10:53.271 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.271 Test: blockdev writev readv block ...passed 00:10:53.271 Test: blockdev writev readv size > 128k ...passed 00:10:53.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.271 Test: blockdev comparev and writev ...passed 00:10:53.271 Test: blockdev nvme passthru rw ...passed 00:10:53.271 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.271 Test: blockdev nvme admin passthru ...passed 00:10:53.271 Test: blockdev copy ...passed 00:10:53.271 Suite: bdevio tests on: concat0 00:10:53.271 Test: blockdev write read block ...passed 00:10:53.271 Test: blockdev write zeroes read block ...passed 00:10:53.271 Test: blockdev write zeroes read no split ...passed 00:10:53.271 Test: blockdev write zeroes read split ...passed 00:10:53.271 Test: blockdev write zeroes read split partial ...passed 00:10:53.271 Test: blockdev reset 
...passed 00:10:53.271 Test: blockdev write read 8 blocks ...passed 00:10:53.271 Test: blockdev write read size > 128k ...passed 00:10:53.271 Test: blockdev write read invalid size ...passed 00:10:53.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.271 Test: blockdev write read max offset ...passed 00:10:53.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.271 Test: blockdev writev readv 8 blocks ...passed 00:10:53.271 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.271 Test: blockdev writev readv block ...passed 00:10:53.271 Test: blockdev writev readv size > 128k ...passed 00:10:53.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.271 Test: blockdev comparev and writev ...passed 00:10:53.271 Test: blockdev nvme passthru rw ...passed 00:10:53.271 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.271 Test: blockdev nvme admin passthru ...passed 00:10:53.271 Test: blockdev copy ...passed 00:10:53.271 Suite: bdevio tests on: raid0 00:10:53.271 Test: blockdev write read block ...passed 00:10:53.271 Test: blockdev write zeroes read block ...passed 00:10:53.271 Test: blockdev write zeroes read no split ...passed 00:10:53.271 Test: blockdev write zeroes read split ...passed 00:10:53.271 Test: blockdev write zeroes read split partial ...passed 00:10:53.271 Test: blockdev reset ...passed 00:10:53.271 Test: blockdev write read 8 blocks ...passed 00:10:53.271 Test: blockdev write read size > 128k ...passed 00:10:53.271 Test: blockdev write read invalid size ...passed 00:10:53.271 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.271 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.271 Test: blockdev write read max offset ...passed 00:10:53.271 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.271 Test: blockdev writev readv 8 blocks ...passed 00:10:53.271 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.271 Test: blockdev writev readv block ...passed 00:10:53.271 Test: blockdev writev readv size > 128k ...passed 00:10:53.271 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.272 Test: blockdev comparev and writev ...passed 00:10:53.272 Test: blockdev nvme passthru rw ...passed 00:10:53.272 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.272 Test: blockdev nvme admin passthru ...passed 00:10:53.272 Test: blockdev copy ...passed 00:10:53.272 Suite: bdevio tests on: TestPT 00:10:53.272 Test: blockdev write read block ...passed 00:10:53.272 Test: blockdev write zeroes read block ...passed 00:10:53.272 Test: blockdev write zeroes read no split ...passed 00:10:53.272 Test: blockdev write zeroes read split ...passed 00:10:53.272 Test: blockdev write zeroes read split partial ...passed 00:10:53.272 Test: blockdev reset ...passed 00:10:53.272 Test: blockdev write read 8 blocks ...passed 00:10:53.272 Test: blockdev write read size > 128k ...passed 00:10:53.272 Test: blockdev write read invalid size ...passed 00:10:53.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.272 Test: blockdev write read max offset ...passed 00:10:53.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.272 Test: blockdev writev readv 8 blocks 
...passed 00:10:53.272 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.272 Test: blockdev writev readv block ...passed 00:10:53.272 Test: blockdev writev readv size > 128k ...passed 00:10:53.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.272 Test: blockdev comparev and writev ...passed 00:10:53.272 Test: blockdev nvme passthru rw ...passed 00:10:53.272 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.272 Test: blockdev nvme admin passthru ...passed 00:10:53.272 Test: blockdev copy ...passed 00:10:53.272 Suite: bdevio tests on: Malloc2p7 00:10:53.272 Test: blockdev write read block ...passed 00:10:53.272 Test: blockdev write zeroes read block ...passed 00:10:53.272 Test: blockdev write zeroes read no split ...passed 00:10:53.272 Test: blockdev write zeroes read split ...passed 00:10:53.272 Test: blockdev write zeroes read split partial ...passed 00:10:53.272 Test: blockdev reset ...passed 00:10:53.272 Test: blockdev write read 8 blocks ...passed 00:10:53.272 Test: blockdev write read size > 128k ...passed 00:10:53.272 Test: blockdev write read invalid size ...passed 00:10:53.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.272 Test: blockdev write read max offset ...passed 00:10:53.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.272 Test: blockdev writev readv 8 blocks ...passed 00:10:53.272 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.272 Test: blockdev writev readv block ...passed 00:10:53.272 Test: blockdev writev readv size > 128k ...passed 00:10:53.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.272 Test: blockdev comparev and writev ...passed 00:10:53.272 Test: blockdev nvme passthru rw ...passed 00:10:53.272 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.272 Test: blockdev nvme admin passthru ...passed 00:10:53.272 Test: blockdev copy ...passed 00:10:53.272 Suite: bdevio tests on: Malloc2p6 00:10:53.272 Test: blockdev write read block ...passed 00:10:53.272 Test: blockdev write zeroes read block ...passed 00:10:53.272 Test: blockdev write zeroes read no split ...passed 00:10:53.272 Test: blockdev write zeroes read split ...passed 00:10:53.272 Test: blockdev write zeroes read split partial ...passed 00:10:53.272 Test: blockdev reset ...passed 00:10:53.272 Test: blockdev write read 8 blocks ...passed 00:10:53.272 Test: blockdev write read size > 128k ...passed 00:10:53.272 Test: blockdev write read invalid size ...passed 00:10:53.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.272 Test: blockdev write read max offset ...passed 00:10:53.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.272 Test: blockdev writev readv 8 blocks ...passed 00:10:53.272 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.272 Test: blockdev writev readv block ...passed 00:10:53.272 Test: blockdev writev readv size > 128k ...passed 00:10:53.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.272 Test: blockdev comparev and writev ...passed 00:10:53.272 Test: blockdev nvme passthru rw ...passed 00:10:53.272 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.272 Test: blockdev nvme admin passthru ...passed 00:10:53.272 Test: blockdev copy ...passed 
00:10:53.272 Suite: bdevio tests on: Malloc2p5 00:10:53.272 Test: blockdev write read block ...passed 00:10:53.272 Test: blockdev write zeroes read block ...passed 00:10:53.272 Test: blockdev write zeroes read no split ...passed 00:10:53.531 Test: blockdev write zeroes read split ...passed 00:10:53.531 Test: blockdev write zeroes read split partial ...passed 00:10:53.531 Test: blockdev reset ...passed 00:10:53.531 Test: blockdev write read 8 blocks ...passed 00:10:53.531 Test: blockdev write read size > 128k ...passed 00:10:53.531 Test: blockdev write read invalid size ...passed 00:10:53.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.531 Test: blockdev write read max offset ...passed 00:10:53.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.531 Test: blockdev writev readv 8 blocks ...passed 00:10:53.531 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.531 Test: blockdev writev readv block ...passed 00:10:53.531 Test: blockdev writev readv size > 128k ...passed 00:10:53.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.531 Test: blockdev comparev and writev ...passed 00:10:53.531 Test: blockdev nvme passthru rw ...passed 00:10:53.531 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.531 Test: blockdev nvme admin passthru ...passed 00:10:53.531 Test: blockdev copy ...passed 00:10:53.531 Suite: bdevio tests on: Malloc2p4 00:10:53.531 Test: blockdev write read block ...passed 00:10:53.531 Test: blockdev write zeroes read block ...passed 00:10:53.531 Test: blockdev write zeroes read no split ...passed 00:10:53.531 Test: blockdev write zeroes read split ...passed 00:10:53.531 Test: blockdev write zeroes read split partial ...passed 00:10:53.531 Test: blockdev reset ...passed 00:10:53.531 Test: blockdev write read 8 blocks ...passed 00:10:53.531 Test: blockdev write read size > 128k ...passed 00:10:53.531 Test: blockdev write read invalid size ...passed 00:10:53.531 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.531 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.531 Test: blockdev write read max offset ...passed 00:10:53.531 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.531 Test: blockdev writev readv 8 blocks ...passed 00:10:53.531 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.531 Test: blockdev writev readv block ...passed 00:10:53.531 Test: blockdev writev readv size > 128k ...passed 00:10:53.531 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.531 Test: blockdev comparev and writev ...passed 00:10:53.531 Test: blockdev nvme passthru rw ...passed 00:10:53.531 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.531 Test: blockdev nvme admin passthru ...passed 00:10:53.531 Test: blockdev copy ...passed 00:10:53.531 Suite: bdevio tests on: Malloc2p3 00:10:53.532 Test: blockdev write read block ...passed 00:10:53.532 Test: blockdev write zeroes read block ...passed 00:10:53.532 Test: blockdev write zeroes read no split ...passed 00:10:53.532 Test: blockdev write zeroes read split ...passed 00:10:53.532 Test: blockdev write zeroes read split partial ...passed 00:10:53.532 Test: blockdev reset ...passed 00:10:53.532 Test: blockdev write read 8 blocks ...passed 00:10:53.532 Test: blockdev write read size > 128k ...passed 00:10:53.532 Test: 
blockdev write read invalid size ...passed 00:10:53.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.532 Test: blockdev write read max offset ...passed 00:10:53.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.532 Test: blockdev writev readv 8 blocks ...passed 00:10:53.532 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.532 Test: blockdev writev readv block ...passed 00:10:53.532 Test: blockdev writev readv size > 128k ...passed 00:10:53.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.532 Test: blockdev comparev and writev ...passed 00:10:53.532 Test: blockdev nvme passthru rw ...passed 00:10:53.532 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.532 Test: blockdev nvme admin passthru ...passed 00:10:53.532 Test: blockdev copy ...passed 00:10:53.532 Suite: bdevio tests on: Malloc2p2 00:10:53.532 Test: blockdev write read block ...passed 00:10:53.532 Test: blockdev write zeroes read block ...passed 00:10:53.532 Test: blockdev write zeroes read no split ...passed 00:10:53.532 Test: blockdev write zeroes read split ...passed 00:10:53.532 Test: blockdev write zeroes read split partial ...passed 00:10:53.532 Test: blockdev reset ...passed 00:10:53.532 Test: blockdev write read 8 blocks ...passed 00:10:53.532 Test: blockdev write read size > 128k ...passed 00:10:53.532 Test: blockdev write read invalid size ...passed 00:10:53.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.532 Test: blockdev write read max offset ...passed 00:10:53.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.532 Test: blockdev writev readv 8 blocks ...passed 00:10:53.532 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.532 Test: blockdev writev readv block ...passed 00:10:53.532 Test: blockdev writev readv size > 128k ...passed 00:10:53.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.532 Test: blockdev comparev and writev ...passed 00:10:53.532 Test: blockdev nvme passthru rw ...passed 00:10:53.532 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.532 Test: blockdev nvme admin passthru ...passed 00:10:53.532 Test: blockdev copy ...passed 00:10:53.532 Suite: bdevio tests on: Malloc2p1 00:10:53.532 Test: blockdev write read block ...passed 00:10:53.532 Test: blockdev write zeroes read block ...passed 00:10:53.532 Test: blockdev write zeroes read no split ...passed 00:10:53.532 Test: blockdev write zeroes read split ...passed 00:10:53.532 Test: blockdev write zeroes read split partial ...passed 00:10:53.532 Test: blockdev reset ...passed 00:10:53.532 Test: blockdev write read 8 blocks ...passed 00:10:53.532 Test: blockdev write read size > 128k ...passed 00:10:53.532 Test: blockdev write read invalid size ...passed 00:10:53.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.532 Test: blockdev write read max offset ...passed 00:10:53.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.532 Test: blockdev writev readv 8 blocks ...passed 00:10:53.532 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.532 Test: blockdev writev readv block ...passed 
00:10:53.532 Test: blockdev writev readv size > 128k ...passed 00:10:53.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.532 Test: blockdev comparev and writev ...passed 00:10:53.532 Test: blockdev nvme passthru rw ...passed 00:10:53.532 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.532 Test: blockdev nvme admin passthru ...passed 00:10:53.532 Test: blockdev copy ...passed 00:10:53.532 Suite: bdevio tests on: Malloc2p0 00:10:53.532 Test: blockdev write read block ...passed 00:10:53.532 Test: blockdev write zeroes read block ...passed 00:10:53.532 Test: blockdev write zeroes read no split ...passed 00:10:53.532 Test: blockdev write zeroes read split ...passed 00:10:53.532 Test: blockdev write zeroes read split partial ...passed 00:10:53.532 Test: blockdev reset ...passed 00:10:53.532 Test: blockdev write read 8 blocks ...passed 00:10:53.532 Test: blockdev write read size > 128k ...passed 00:10:53.532 Test: blockdev write read invalid size ...passed 00:10:53.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.532 Test: blockdev write read max offset ...passed 00:10:53.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.532 Test: blockdev writev readv 8 blocks ...passed 00:10:53.532 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.532 Test: blockdev writev readv block ...passed 00:10:53.532 Test: blockdev writev readv size > 128k ...passed 00:10:53.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.532 Test: blockdev comparev and writev ...passed 00:10:53.532 Test: blockdev nvme passthru rw ...passed 00:10:53.532 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.532 Test: blockdev nvme admin passthru ...passed 00:10:53.532 Test: blockdev copy ...passed 00:10:53.532 Suite: bdevio tests on: Malloc1p1 00:10:53.532 Test: blockdev write read block ...passed 00:10:53.532 Test: blockdev write zeroes read block ...passed 00:10:53.532 Test: blockdev write zeroes read no split ...passed 00:10:53.532 Test: blockdev write zeroes read split ...passed 00:10:53.532 Test: blockdev write zeroes read split partial ...passed 00:10:53.532 Test: blockdev reset ...passed 00:10:53.532 Test: blockdev write read 8 blocks ...passed 00:10:53.532 Test: blockdev write read size > 128k ...passed 00:10:53.532 Test: blockdev write read invalid size ...passed 00:10:53.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.532 Test: blockdev write read max offset ...passed 00:10:53.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.532 Test: blockdev writev readv 8 blocks ...passed 00:10:53.532 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.532 Test: blockdev writev readv block ...passed 00:10:53.532 Test: blockdev writev readv size > 128k ...passed 00:10:53.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.532 Test: blockdev comparev and writev ...passed 00:10:53.532 Test: blockdev nvme passthru rw ...passed 00:10:53.532 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.532 Test: blockdev nvme admin passthru ...passed 00:10:53.532 Test: blockdev copy ...passed 00:10:53.532 Suite: bdevio tests on: Malloc1p0 00:10:53.532 Test: blockdev write read block ...passed 00:10:53.532 Test: blockdev 
write zeroes read block ...passed 00:10:53.532 Test: blockdev write zeroes read no split ...passed 00:10:53.532 Test: blockdev write zeroes read split ...passed 00:10:53.532 Test: blockdev write zeroes read split partial ...passed 00:10:53.532 Test: blockdev reset ...passed 00:10:53.532 Test: blockdev write read 8 blocks ...passed 00:10:53.532 Test: blockdev write read size > 128k ...passed 00:10:53.532 Test: blockdev write read invalid size ...passed 00:10:53.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.532 Test: blockdev write read max offset ...passed 00:10:53.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.533 Test: blockdev writev readv 8 blocks ...passed 00:10:53.533 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.533 Test: blockdev writev readv block ...passed 00:10:53.533 Test: blockdev writev readv size > 128k ...passed 00:10:53.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.533 Test: blockdev comparev and writev ...passed 00:10:53.533 Test: blockdev nvme passthru rw ...passed 00:10:53.533 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.533 Test: blockdev nvme admin passthru ...passed 00:10:53.533 Test: blockdev copy ...passed 00:10:53.533 Suite: bdevio tests on: Malloc0 00:10:53.533 Test: blockdev write read block ...passed 00:10:53.533 Test: blockdev write zeroes read block ...passed 00:10:53.533 Test: blockdev write zeroes read no split ...passed 00:10:53.533 Test: blockdev write zeroes read split ...passed 00:10:53.533 Test: blockdev write zeroes read split partial ...passed 00:10:53.533 Test: blockdev reset ...passed 00:10:53.533 Test: blockdev write read 8 blocks ...passed 00:10:53.533 Test: blockdev write read size > 128k ...passed 00:10:53.533 Test: blockdev write read invalid size ...passed 00:10:53.533 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:53.533 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:53.533 Test: blockdev write read max offset ...passed 00:10:53.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:53.533 Test: blockdev writev readv 8 blocks ...passed 00:10:53.533 Test: blockdev writev readv 30 x 1block ...passed 00:10:53.533 Test: blockdev writev readv block ...passed 00:10:53.533 Test: blockdev writev readv size > 128k ...passed 00:10:53.533 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:53.533 Test: blockdev comparev and writev ...passed 00:10:53.533 Test: blockdev nvme passthru rw ...passed 00:10:53.533 Test: blockdev nvme passthru vendor specific ...passed 00:10:53.533 Test: blockdev nvme admin passthru ...passed 00:10:53.533 Test: blockdev copy ...passed 00:10:53.533 00:10:53.533 Run Summary: Type Total Ran Passed Failed Inactive 00:10:53.533 suites 16 16 n/a 0 0 00:10:53.533 tests 368 368 368 0 0 00:10:53.533 asserts 2224 2224 2224 0 n/a 00:10:53.533 00:10:53.533 Elapsed time = 0.676 seconds 00:10:53.533 0 00:10:53.533 14:12:45 -- bdev/blockdev.sh@293 -- # killprocess 119329 00:10:53.533 14:12:45 -- common/autotest_common.sh@936 -- # '[' -z 119329 ']' 00:10:53.533 14:12:45 -- common/autotest_common.sh@940 -- # kill -0 119329 00:10:53.533 14:12:45 -- common/autotest_common.sh@941 -- # uname 00:10:53.533 14:12:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.533 14:12:45 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119329 00:10:53.533 14:12:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:53.533 14:12:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:53.533 killing process with pid 119329 00:10:53.533 14:12:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119329' 00:10:53.533 14:12:45 -- common/autotest_common.sh@955 -- # kill 119329 00:10:53.533 14:12:45 -- common/autotest_common.sh@960 -- # wait 119329 00:10:54.101 14:12:45 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:54.101 00:10:54.101 real 0m1.879s 00:10:54.101 user 0m4.507s 00:10:54.101 sys 0m0.491s 00:10:54.101 14:12:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:54.101 ************************************ 00:10:54.101 14:12:45 -- common/autotest_common.sh@10 -- # set +x 00:10:54.101 END TEST bdev_bounds 00:10:54.101 ************************************ 00:10:54.101 14:12:45 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:54.101 14:12:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:10:54.101 14:12:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.101 14:12:45 -- common/autotest_common.sh@10 -- # set +x 00:10:54.101 ************************************ 00:10:54.101 START TEST bdev_nbd 00:10:54.101 ************************************ 00:10:54.101 14:12:45 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:10:54.101 14:12:45 -- bdev/blockdev.sh@298 -- # uname -s 00:10:54.101 14:12:45 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:10:54.101 14:12:45 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.101 14:12:45 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:54.101 14:12:45 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:54.101 14:12:45 -- bdev/blockdev.sh@302 -- # local bdev_all 00:10:54.101 14:12:45 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:10:54.101 14:12:45 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:10:54.101 14:12:45 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:54.101 14:12:45 -- bdev/blockdev.sh@309 -- # local nbd_all 00:10:54.101 14:12:45 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:10:54.101 14:12:45 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:54.101 14:12:45 -- bdev/blockdev.sh@312 -- # local nbd_list 00:10:54.101 14:12:45 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:54.101 14:12:45 -- bdev/blockdev.sh@313 -- # local bdev_list 00:10:54.101 14:12:45 -- bdev/blockdev.sh@316 -- # nbd_pid=119387 00:10:54.101 14:12:45 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:54.101 14:12:45 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:54.101 14:12:45 -- bdev/blockdev.sh@318 -- # waitforlisten 119387 /var/tmp/spdk-nbd.sock 00:10:54.101 14:12:45 -- common/autotest_common.sh@829 -- # '[' -z 119387 ']' 00:10:54.101 14:12:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:54.101 14:12:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.101 14:12:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:54.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:54.101 14:12:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.101 14:12:45 -- common/autotest_common.sh@10 -- # set +x 00:10:54.101 [2024-11-18 14:12:46.040402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:54.101 [2024-11-18 14:12:46.040600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.360 [2024-11-18 14:12:46.176815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.360 [2024-11-18 14:12:46.243711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.360 [2024-11-18 14:12:46.411826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:54.360 [2024-11-18 14:12:46.411969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:54.360 [2024-11-18 14:12:46.419729] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:54.360 [2024-11-18 14:12:46.419809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:54.360 [2024-11-18 14:12:46.427767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:54.360 [2024-11-18 14:12:46.427847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:54.360 [2024-11-18 14:12:46.427897] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:54.618 [2024-11-18 14:12:46.534080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:54.618 [2024-11-18 14:12:46.534216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.618 [2024-11-18 14:12:46.534292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:54.618 [2024-11-18 14:12:46.534334] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.618 [2024-11-18 14:12:46.536819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.619 [2024-11-18 14:12:46.536889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:54.877 14:12:46 -- common/autotest_common.sh@858 -- # (( i == 0 
)) 00:10:54.877 14:12:46 -- common/autotest_common.sh@862 -- # return 0 00:10:54.877 14:12:46 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@24 -- # local i 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:54.877 14:12:46 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:10:55.137 14:12:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:55.137 14:12:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:55.137 14:12:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:55.137 14:12:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:55.137 14:12:47 -- common/autotest_common.sh@867 -- # local i 00:10:55.137 14:12:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:55.137 14:12:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:55.137 14:12:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:55.137 14:12:47 -- common/autotest_common.sh@871 -- # break 00:10:55.137 14:12:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:55.137 14:12:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:55.137 14:12:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.137 1+0 records in 00:10:55.137 1+0 records out 00:10:55.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238845 s, 17.1 MB/s 00:10:55.137 14:12:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.137 14:12:47 -- common/autotest_common.sh@884 -- # size=4096 00:10:55.137 14:12:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.137 14:12:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:55.137 14:12:47 -- common/autotest_common.sh@887 -- # return 0 00:10:55.137 14:12:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.137 14:12:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:55.137 14:12:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:10:55.395 14:12:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:55.395 14:12:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:55.395 14:12:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:55.395 14:12:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:55.395 14:12:47 -- common/autotest_common.sh@867 -- # local i 00:10:55.395 14:12:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:55.395 14:12:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:55.395 14:12:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:55.395 14:12:47 -- common/autotest_common.sh@871 -- # break 00:10:55.395 14:12:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:55.395 14:12:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:55.395 14:12:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.395 1+0 records in 00:10:55.395 1+0 records out 00:10:55.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363646 s, 11.3 MB/s 00:10:55.395 14:12:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.653 14:12:47 -- common/autotest_common.sh@884 -- # size=4096 00:10:55.653 14:12:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.653 14:12:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:55.653 14:12:47 -- common/autotest_common.sh@887 -- # return 0 00:10:55.653 14:12:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.653 14:12:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:55.653 14:12:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:10:55.653 14:12:47 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:55.653 14:12:47 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:55.653 14:12:47 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:55.653 14:12:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:10:55.653 14:12:47 -- common/autotest_common.sh@867 -- # local i 00:10:55.653 14:12:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:55.653 14:12:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:55.653 14:12:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:10:55.653 14:12:47 -- common/autotest_common.sh@871 -- # break 00:10:55.653 14:12:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:55.653 14:12:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:55.653 14:12:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:55.912 1+0 records in 00:10:55.912 1+0 records out 00:10:55.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115914 s, 3.5 MB/s 00:10:55.912 14:12:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.912 14:12:47 -- common/autotest_common.sh@884 -- # size=4096 00:10:55.912 14:12:47 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:55.912 14:12:47 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:55.912 14:12:47 -- common/autotest_common.sh@887 -- # return 0 00:10:55.912 14:12:47 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:55.912 14:12:47 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
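
Three bdevs have been exported so far (Malloc0 on /dev/nbd0, Malloc1p0 on /dev/nbd1, Malloc1p1 on /dev/nbd2), and the trace repeats the same shape for each: nbd_start_disk is invoked without an explicit /dev/nbdX, so SPDK allocates the next free node and prints it, the caller captures that into nbd_device, and waitfornbd then blocks until the node is usable. Reduced to its shape, the loop being traced is roughly the following sketch (simplified from the nbd_common.sh helpers; error handling and the 16-device index bookkeeping are omitted):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # The 16 bdev names passed to nbd_function_test, as listed in the log above.
    bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4
               Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
    for bdev in "${bdev_list[@]}"; do
        # The RPC prints the allocated device node, e.g. /dev/nbd3.
        nbd_device=$($rpc_py nbd_start_disk "$bdev")
        waitfornbd "$(basename "$nbd_device")"
    done
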
00:10:55.912 14:12:47 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:10:56.170 14:12:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:56.170 14:12:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:56.170 14:12:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:56.170 14:12:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:10:56.170 14:12:48 -- common/autotest_common.sh@867 -- # local i 00:10:56.170 14:12:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:56.170 14:12:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:56.170 14:12:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:10:56.170 14:12:48 -- common/autotest_common.sh@871 -- # break 00:10:56.170 14:12:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:56.170 14:12:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:56.170 14:12:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.170 1+0 records in 00:10:56.170 1+0 records out 00:10:56.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401849 s, 10.2 MB/s 00:10:56.170 14:12:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.170 14:12:48 -- common/autotest_common.sh@884 -- # size=4096 00:10:56.170 14:12:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.170 14:12:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:56.170 14:12:48 -- common/autotest_common.sh@887 -- # return 0 00:10:56.170 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.170 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:56.170 14:12:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:10:56.429 14:12:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:56.429 14:12:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:56.429 14:12:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:56.429 14:12:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:10:56.429 14:12:48 -- common/autotest_common.sh@867 -- # local i 00:10:56.429 14:12:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:56.429 14:12:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:56.429 14:12:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:10:56.429 14:12:48 -- common/autotest_common.sh@871 -- # break 00:10:56.429 14:12:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:56.429 14:12:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:56.429 14:12:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.429 1+0 records in 00:10:56.429 1+0 records out 00:10:56.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356636 s, 11.5 MB/s 00:10:56.429 14:12:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.429 14:12:48 -- common/autotest_common.sh@884 -- # size=4096 00:10:56.429 14:12:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.429 14:12:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:56.429 14:12:48 -- common/autotest_common.sh@887 -- # return 0 00:10:56.429 14:12:48 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.429 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:56.429 14:12:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:10:56.688 14:12:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:56.688 14:12:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:56.688 14:12:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:56.688 14:12:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:10:56.688 14:12:48 -- common/autotest_common.sh@867 -- # local i 00:10:56.688 14:12:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:56.688 14:12:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:56.688 14:12:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:10:56.688 14:12:48 -- common/autotest_common.sh@871 -- # break 00:10:56.688 14:12:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:56.688 14:12:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:56.688 14:12:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.688 1+0 records in 00:10:56.688 1+0 records out 00:10:56.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469292 s, 8.7 MB/s 00:10:56.688 14:12:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.688 14:12:48 -- common/autotest_common.sh@884 -- # size=4096 00:10:56.688 14:12:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.688 14:12:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:56.688 14:12:48 -- common/autotest_common.sh@887 -- # return 0 00:10:56.688 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.688 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:56.688 14:12:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:10:56.948 14:12:48 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:56.948 14:12:48 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:56.948 14:12:48 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:56.948 14:12:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:10:56.948 14:12:48 -- common/autotest_common.sh@867 -- # local i 00:10:56.948 14:12:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:56.948 14:12:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:56.948 14:12:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:10:56.948 14:12:48 -- common/autotest_common.sh@871 -- # break 00:10:56.948 14:12:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:56.948 14:12:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:56.948 14:12:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:56.948 1+0 records in 00:10:56.948 1+0 records out 00:10:56.948 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470901 s, 8.7 MB/s 00:10:56.948 14:12:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.948 14:12:48 -- common/autotest_common.sh@884 -- # size=4096 00:10:56.948 14:12:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:56.948 14:12:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
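
The waitfornbd check traced repeatedly here follows a fixed pattern: poll /proc/partitions until the kernel has registered the device (the trace shows a bound of 20 attempts), then prove the node is actually usable by reading one 4 KiB block through it with O_DIRECT and verifying a non-empty read-back before cleaning up. A condensed sketch of such a helper; the poll interval and the /tmp scratch path are assumptions, since xtrace shows neither the timing between attempts nor uses /tmp:

    waitfornbd() {
        local nbd_name=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            # The device shows up in /proc/partitions once the kernel attached it.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed poll interval
        done
        # One O_DIRECT read proves I/O completes end to end through the device.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read-back => device is ready
    }
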
00:10:56.948 14:12:48 -- common/autotest_common.sh@887 -- # return 0 00:10:56.948 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:56.948 14:12:48 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:56.948 14:12:48 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:10:57.207 14:12:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:10:57.207 14:12:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:10:57.207 14:12:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:10:57.207 14:12:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:10:57.207 14:12:49 -- common/autotest_common.sh@867 -- # local i 00:10:57.207 14:12:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:57.207 14:12:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:57.207 14:12:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:10:57.207 14:12:49 -- common/autotest_common.sh@871 -- # break 00:10:57.207 14:12:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:57.207 14:12:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:57.207 14:12:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.207 1+0 records in 00:10:57.207 1+0 records out 00:10:57.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866858 s, 4.7 MB/s 00:10:57.207 14:12:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.207 14:12:49 -- common/autotest_common.sh@884 -- # size=4096 00:10:57.207 14:12:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.207 14:12:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:57.207 14:12:49 -- common/autotest_common.sh@887 -- # return 0 00:10:57.207 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.207 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:57.207 14:12:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:10:57.465 14:12:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:10:57.465 14:12:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:10:57.465 14:12:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:10:57.465 14:12:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:10:57.465 14:12:49 -- common/autotest_common.sh@867 -- # local i 00:10:57.465 14:12:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:57.465 14:12:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:57.465 14:12:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:10:57.465 14:12:49 -- common/autotest_common.sh@871 -- # break 00:10:57.465 14:12:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:57.465 14:12:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:57.465 14:12:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.465 1+0 records in 00:10:57.465 1+0 records out 00:10:57.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509483 s, 8.0 MB/s 00:10:57.465 14:12:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.465 14:12:49 -- common/autotest_common.sh@884 -- # size=4096 00:10:57.465 14:12:49 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.465 14:12:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:57.465 14:12:49 -- common/autotest_common.sh@887 -- # return 0 00:10:57.465 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.465 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:57.465 14:12:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:10:57.724 14:12:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:10:57.724 14:12:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:10:57.724 14:12:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:10:57.724 14:12:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:10:57.724 14:12:49 -- common/autotest_common.sh@867 -- # local i 00:10:57.724 14:12:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:57.724 14:12:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:57.724 14:12:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:10:57.724 14:12:49 -- common/autotest_common.sh@871 -- # break 00:10:57.724 14:12:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:57.724 14:12:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:57.724 14:12:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.724 1+0 records in 00:10:57.724 1+0 records out 00:10:57.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423703 s, 9.7 MB/s 00:10:57.724 14:12:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.724 14:12:49 -- common/autotest_common.sh@884 -- # size=4096 00:10:57.724 14:12:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.724 14:12:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:57.724 14:12:49 -- common/autotest_common.sh@887 -- # return 0 00:10:57.724 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.724 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:57.724 14:12:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:10:57.983 14:12:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:10:57.983 14:12:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:10:57.983 14:12:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:10:57.983 14:12:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:10:57.983 14:12:49 -- common/autotest_common.sh@867 -- # local i 00:10:57.983 14:12:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:57.983 14:12:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:57.983 14:12:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:10:57.983 14:12:49 -- common/autotest_common.sh@871 -- # break 00:10:57.983 14:12:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:57.983 14:12:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:57.983 14:12:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:57.983 1+0 records in 00:10:57.983 1+0 records out 00:10:57.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741432 s, 5.5 MB/s 00:10:57.983 14:12:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.983 14:12:49 -- 
common/autotest_common.sh@884 -- # size=4096 00:10:57.983 14:12:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:57.983 14:12:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:57.983 14:12:49 -- common/autotest_common.sh@887 -- # return 0 00:10:57.983 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:57.983 14:12:49 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:57.983 14:12:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:10:58.242 14:12:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:10:58.242 14:12:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:10:58.242 14:12:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:10:58.242 14:12:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:10:58.242 14:12:50 -- common/autotest_common.sh@867 -- # local i 00:10:58.242 14:12:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:58.242 14:12:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:58.242 14:12:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:10:58.242 14:12:50 -- common/autotest_common.sh@871 -- # break 00:10:58.242 14:12:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:58.242 14:12:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:58.242 14:12:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.242 1+0 records in 00:10:58.242 1+0 records out 00:10:58.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075587 s, 5.4 MB/s 00:10:58.242 14:12:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.242 14:12:50 -- common/autotest_common.sh@884 -- # size=4096 00:10:58.242 14:12:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.242 14:12:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:58.242 14:12:50 -- common/autotest_common.sh@887 -- # return 0 00:10:58.242 14:12:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:58.242 14:12:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:58.242 14:12:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:10:58.501 14:12:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:10:58.501 14:12:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:10:58.501 14:12:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:10:58.501 14:12:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:10:58.501 14:12:50 -- common/autotest_common.sh@867 -- # local i 00:10:58.501 14:12:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:58.501 14:12:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:58.501 14:12:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:10:58.501 14:12:50 -- common/autotest_common.sh@871 -- # break 00:10:58.501 14:12:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:58.501 14:12:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:58.501 14:12:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.501 1+0 records in 00:10:58.501 1+0 records out 00:10:58.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722688 s, 5.7 MB/s 00:10:58.501 14:12:50 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.501 14:12:50 -- common/autotest_common.sh@884 -- # size=4096 00:10:58.501 14:12:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.501 14:12:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:58.501 14:12:50 -- common/autotest_common.sh@887 -- # return 0 00:10:58.501 14:12:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:58.501 14:12:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:58.501 14:12:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:10:58.760 14:12:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:10:58.760 14:12:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:10:58.760 14:12:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:10:58.760 14:12:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:10:58.760 14:12:50 -- common/autotest_common.sh@867 -- # local i 00:10:58.760 14:12:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:58.760 14:12:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:58.760 14:12:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:10:58.760 14:12:50 -- common/autotest_common.sh@871 -- # break 00:10:58.760 14:12:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:58.760 14:12:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:58.760 14:12:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:58.760 1+0 records in 00:10:58.760 1+0 records out 00:10:58.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674331 s, 6.1 MB/s 00:10:58.760 14:12:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.760 14:12:50 -- common/autotest_common.sh@884 -- # size=4096 00:10:58.760 14:12:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:58.760 14:12:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:58.760 14:12:50 -- common/autotest_common.sh@887 -- # return 0 00:10:58.760 14:12:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:58.760 14:12:50 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:58.760 14:12:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:10:59.019 14:12:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:10:59.019 14:12:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:10:59.019 14:12:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:10:59.019 14:12:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:10:59.019 14:12:51 -- common/autotest_common.sh@867 -- # local i 00:10:59.019 14:12:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:59.019 14:12:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:59.019 14:12:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:10:59.019 14:12:51 -- common/autotest_common.sh@871 -- # break 00:10:59.019 14:12:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:59.019 14:12:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:59.019 14:12:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.019 1+0 records in 00:10:59.019 1+0 records out 
00:10:59.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681125 s, 6.0 MB/s 00:10:59.019 14:12:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.019 14:12:51 -- common/autotest_common.sh@884 -- # size=4096 00:10:59.019 14:12:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.019 14:12:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:59.019 14:12:51 -- common/autotest_common.sh@887 -- # return 0 00:10:59.019 14:12:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.019 14:12:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:59.019 14:12:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:10:59.278 14:12:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:10:59.278 14:12:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:10:59.278 14:12:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:10:59.278 14:12:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:10:59.278 14:12:51 -- common/autotest_common.sh@867 -- # local i 00:10:59.278 14:12:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:59.278 14:12:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:59.278 14:12:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:10:59.278 14:12:51 -- common/autotest_common.sh@871 -- # break 00:10:59.278 14:12:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:59.278 14:12:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:59.278 14:12:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:59.278 1+0 records in 00:10:59.278 1+0 records out 00:10:59.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879024 s, 4.7 MB/s 00:10:59.278 14:12:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.278 14:12:51 -- common/autotest_common.sh@884 -- # size=4096 00:10:59.278 14:12:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:59.278 14:12:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:59.278 14:12:51 -- common/autotest_common.sh@887 -- # return 0 00:10:59.278 14:12:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:59.278 14:12:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:10:59.278 14:12:51 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:59.537 14:12:51 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd0", 00:10:59.537 "bdev_name": "Malloc0" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd1", 00:10:59.537 "bdev_name": "Malloc1p0" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd2", 00:10:59.537 "bdev_name": "Malloc1p1" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd3", 00:10:59.537 "bdev_name": "Malloc2p0" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd4", 00:10:59.537 "bdev_name": "Malloc2p1" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd5", 00:10:59.537 "bdev_name": "Malloc2p2" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd6", 00:10:59.537 "bdev_name": "Malloc2p3" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd7", 00:10:59.537 "bdev_name": "Malloc2p4" 00:10:59.537 }, 
00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd8", 00:10:59.537 "bdev_name": "Malloc2p5" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd9", 00:10:59.537 "bdev_name": "Malloc2p6" 00:10:59.537 }, 00:10:59.537 { 00:10:59.537 "nbd_device": "/dev/nbd10", 00:10:59.537 "bdev_name": "Malloc2p7" 00:10:59.537 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd11", 00:10:59.538 "bdev_name": "TestPT" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd12", 00:10:59.538 "bdev_name": "raid0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd13", 00:10:59.538 "bdev_name": "concat0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd14", 00:10:59.538 "bdev_name": "raid1" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd15", 00:10:59.538 "bdev_name": "AIO0" 00:10:59.538 } 00:10:59.538 ]' 00:10:59.538 14:12:51 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:59.538 14:12:51 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd0", 00:10:59.538 "bdev_name": "Malloc0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd1", 00:10:59.538 "bdev_name": "Malloc1p0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd2", 00:10:59.538 "bdev_name": "Malloc1p1" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd3", 00:10:59.538 "bdev_name": "Malloc2p0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd4", 00:10:59.538 "bdev_name": "Malloc2p1" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd5", 00:10:59.538 "bdev_name": "Malloc2p2" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd6", 00:10:59.538 "bdev_name": "Malloc2p3" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd7", 00:10:59.538 "bdev_name": "Malloc2p4" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd8", 00:10:59.538 "bdev_name": "Malloc2p5" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd9", 00:10:59.538 "bdev_name": "Malloc2p6" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd10", 00:10:59.538 "bdev_name": "Malloc2p7" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd11", 00:10:59.538 "bdev_name": "TestPT" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd12", 00:10:59.538 "bdev_name": "raid0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd13", 00:10:59.538 "bdev_name": "concat0" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd14", 00:10:59.538 "bdev_name": "raid1" 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "nbd_device": "/dev/nbd15", 00:10:59.538 "bdev_name": "AIO0" 00:10:59.538 } 00:10:59.538 ]' 00:10:59.538 14:12:51 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@51 -- # local i 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:59.797 14:12:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@41 -- # break 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.055 14:12:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@41 -- # break 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.055 14:12:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@41 -- # break 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.314 14:12:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@41 -- # break 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.573 14:12:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:00.832 
14:12:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@41 -- # break 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@45 -- # return 0 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:00.832 14:12:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.091 14:12:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.349 14:12:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:01.608 14:12:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@41 -- # break 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@45 -- # return 0 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:01.867 14:12:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@41 -- # break 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.125 14:12:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@41 -- # break 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@41 -- # break 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.384 14:12:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@41 -- # break 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.642 14:12:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@41 -- # break 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.901 14:12:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@41 -- # break 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:03.160 14:12:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@41 -- # break 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.418 14:12:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@65 -- # true 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@65 -- # count=0 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@122 -- # count=0 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@127 -- # return 0 00:11:03.677 14:12:55 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@12 -- # local i 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:03.677 14:12:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:03.936 /dev/nbd0 00:11:03.936 14:12:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:03.936 14:12:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:03.936 14:12:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:03.936 14:12:55 -- common/autotest_common.sh@867 -- # local i 00:11:03.936 14:12:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:03.936 14:12:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:03.936 14:12:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:03.936 14:12:55 -- common/autotest_common.sh@871 -- # break 00:11:03.936 14:12:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:03.936 14:12:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:03.936 14:12:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:03.936 1+0 records in 00:11:03.936 1+0 records out 00:11:03.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570745 s, 7.2 MB/s 00:11:03.936 14:12:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.936 14:12:55 -- common/autotest_common.sh@884 -- # size=4096 00:11:03.936 14:12:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:03.936 14:12:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:03.936 14:12:55 -- common/autotest_common.sh@887 -- # return 0 00:11:03.936 14:12:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:03.936 
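For reference, the waitfornbd helper being traced here (the common/autotest_common.sh@866 through @887 records above) reduces to the following bash sketch. It is reconstructed from the trace alone, so the retry pacing and the scratch-file handling are assumptions rather than verbatim source:

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # Poll until the kernel lists the device; the trace shows up to 20 attempts.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed pacing; no sleep is visible in the trace itself
        done
        # Prove the device is actually readable: one 4 KiB O_DIRECT read.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$scratch")
                rm -f "$scratch"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1  # assumed pacing
        done
        return 1
    }

The second loop is what produces the "1+0 records in / 1+0 records out" pair after every nbd_start_disk call in this log.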
14:12:55 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:03.936 14:12:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:04.194 /dev/nbd1 00:11:04.194 14:12:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:04.194 14:12:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:04.194 14:12:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:04.194 14:12:56 -- common/autotest_common.sh@867 -- # local i 00:11:04.194 14:12:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:04.194 14:12:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:04.194 14:12:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:04.194 14:12:56 -- common/autotest_common.sh@871 -- # break 00:11:04.194 14:12:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:04.194 14:12:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:04.194 14:12:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:04.194 1+0 records in 00:11:04.194 1+0 records out 00:11:04.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256272 s, 16.0 MB/s 00:11:04.194 14:12:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.194 14:12:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:04.194 14:12:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.194 14:12:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:04.194 14:12:56 -- common/autotest_common.sh@887 -- # return 0 00:11:04.194 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.194 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:04.194 14:12:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:04.453 /dev/nbd10 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:04.453 14:12:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:04.453 14:12:56 -- common/autotest_common.sh@867 -- # local i 00:11:04.453 14:12:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:04.453 14:12:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:04.453 14:12:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:04.453 14:12:56 -- common/autotest_common.sh@871 -- # break 00:11:04.453 14:12:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:04.453 14:12:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:04.453 14:12:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:04.453 1+0 records in 00:11:04.453 1+0 records out 00:11:04.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333855 s, 12.3 MB/s 00:11:04.453 14:12:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.453 14:12:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:04.453 14:12:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.453 14:12:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:04.453 14:12:56 -- common/autotest_common.sh@887 -- # return 0 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:04.453 /dev/nbd11 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:04.453 14:12:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:04.711 14:12:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:04.711 14:12:56 -- common/autotest_common.sh@867 -- # local i 00:11:04.711 14:12:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:04.711 14:12:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:04.711 14:12:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:04.711 14:12:56 -- common/autotest_common.sh@871 -- # break 00:11:04.711 14:12:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:04.711 14:12:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:04.711 14:12:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:04.711 1+0 records in 00:11:04.711 1+0 records out 00:11:04.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261233 s, 15.7 MB/s 00:11:04.711 14:12:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.711 14:12:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:04.711 14:12:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.711 14:12:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:04.711 14:12:56 -- common/autotest_common.sh@887 -- # return 0 00:11:04.711 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.711 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:04.711 14:12:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:04.970 /dev/nbd12 00:11:04.970 14:12:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:04.970 14:12:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:04.970 14:12:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:04.970 14:12:56 -- common/autotest_common.sh@867 -- # local i 00:11:04.970 14:12:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:04.970 14:12:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:04.970 14:12:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:04.970 14:12:56 -- common/autotest_common.sh@871 -- # break 00:11:04.970 14:12:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:04.970 14:12:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:04.970 14:12:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:04.970 1+0 records in 00:11:04.970 1+0 records out 00:11:04.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443942 s, 9.2 MB/s 00:11:04.970 14:12:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.970 14:12:56 -- common/autotest_common.sh@884 -- # size=4096 00:11:04.970 14:12:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:04.970 14:12:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:04.970 14:12:56 -- common/autotest_common.sh@887 -- # return 0 00:11:04.970 14:12:56 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:04.970 14:12:56 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:04.970 14:12:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:05.228 /dev/nbd13 00:11:05.228 14:12:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:05.228 14:12:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:05.228 14:12:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:05.228 14:12:57 -- common/autotest_common.sh@867 -- # local i 00:11:05.228 14:12:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:05.228 14:12:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:05.228 14:12:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:05.228 14:12:57 -- common/autotest_common.sh@871 -- # break 00:11:05.228 14:12:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:05.228 14:12:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:05.228 14:12:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:05.228 1+0 records in 00:11:05.228 1+0 records out 00:11:05.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463802 s, 8.8 MB/s 00:11:05.228 14:12:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.228 14:12:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:05.228 14:12:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.228 14:12:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:05.228 14:12:57 -- common/autotest_common.sh@887 -- # return 0 00:11:05.228 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.228 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:05.228 14:12:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:05.487 /dev/nbd14 00:11:05.487 14:12:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:05.487 14:12:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:05.487 14:12:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:05.487 14:12:57 -- common/autotest_common.sh@867 -- # local i 00:11:05.487 14:12:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:05.487 14:12:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:05.487 14:12:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:05.487 14:12:57 -- common/autotest_common.sh@871 -- # break 00:11:05.487 14:12:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:05.487 14:12:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:05.487 14:12:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:05.487 1+0 records in 00:11:05.487 1+0 records out 00:11:05.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361639 s, 11.3 MB/s 00:11:05.487 14:12:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.487 14:12:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:05.487 14:12:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.487 14:12:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:05.487 14:12:57 -- common/autotest_common.sh@887 -- # return 0 
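Every attach in this sequence is the same RPC round-trip against the spdk-nbd application's Unix socket. A minimal sketch, using the socket path, rpc.py path, and one bdev/device pair taken from the log (any other pairing in the list works the same way):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Export a bdev as a kernel NBD block device.
    "$rpc" -s "$sock" nbd_start_disk Malloc2p0 /dev/nbd11
    # Dump the current bdev-to-/dev/nbd* mappings as JSON and pull out the device names.
    "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'
    # Detach the device again once the test is done with it.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd11

The nbd_get_disks plus jq step is exactly how the traced test builds its nbd_disks_name list before verifying data and tearing the devices down.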
00:11:05.487 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.487 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:05.487 14:12:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:05.745 /dev/nbd15 00:11:05.745 14:12:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:05.745 14:12:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:05.745 14:12:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:05.745 14:12:57 -- common/autotest_common.sh@867 -- # local i 00:11:05.745 14:12:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:05.745 14:12:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:05.745 14:12:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:05.745 14:12:57 -- common/autotest_common.sh@871 -- # break 00:11:05.745 14:12:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:05.745 14:12:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:05.745 14:12:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:05.745 1+0 records in 00:11:05.745 1+0 records out 00:11:05.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437751 s, 9.4 MB/s 00:11:05.745 14:12:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.745 14:12:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:05.745 14:12:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:05.745 14:12:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:05.745 14:12:57 -- common/autotest_common.sh@887 -- # return 0 00:11:05.745 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:05.745 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:05.745 14:12:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:06.004 /dev/nbd2 00:11:06.004 14:12:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:06.004 14:12:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:06.004 14:12:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:06.004 14:12:57 -- common/autotest_common.sh@867 -- # local i 00:11:06.004 14:12:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:06.004 14:12:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:06.004 14:12:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:06.004 14:12:57 -- common/autotest_common.sh@871 -- # break 00:11:06.004 14:12:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:06.004 14:12:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:06.004 14:12:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.004 1+0 records in 00:11:06.004 1+0 records out 00:11:06.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381298 s, 10.7 MB/s 00:11:06.004 14:12:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.004 14:12:57 -- common/autotest_common.sh@884 -- # size=4096 00:11:06.004 14:12:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.004 14:12:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:06.004 14:12:57 -- 
common/autotest_common.sh@887 -- # return 0 00:11:06.004 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.004 14:12:57 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:06.004 14:12:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:06.263 /dev/nbd3 00:11:06.263 14:12:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:06.263 14:12:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:06.263 14:12:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:06.263 14:12:58 -- common/autotest_common.sh@867 -- # local i 00:11:06.263 14:12:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:06.263 14:12:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:06.263 14:12:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:06.263 14:12:58 -- common/autotest_common.sh@871 -- # break 00:11:06.263 14:12:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:06.263 14:12:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:06.263 14:12:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.263 1+0 records in 00:11:06.263 1+0 records out 00:11:06.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485256 s, 8.4 MB/s 00:11:06.263 14:12:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.263 14:12:58 -- common/autotest_common.sh@884 -- # size=4096 00:11:06.263 14:12:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.263 14:12:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:06.263 14:12:58 -- common/autotest_common.sh@887 -- # return 0 00:11:06.263 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.263 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:06.263 14:12:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:06.522 /dev/nbd4 00:11:06.522 14:12:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:06.522 14:12:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:06.522 14:12:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:06.522 14:12:58 -- common/autotest_common.sh@867 -- # local i 00:11:06.522 14:12:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:06.522 14:12:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:06.522 14:12:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:06.522 14:12:58 -- common/autotest_common.sh@871 -- # break 00:11:06.522 14:12:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:06.522 14:12:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:06.522 14:12:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.522 1+0 records in 00:11:06.522 1+0 records out 00:11:06.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599671 s, 6.8 MB/s 00:11:06.522 14:12:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.522 14:12:58 -- common/autotest_common.sh@884 -- # size=4096 00:11:06.522 14:12:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.522 14:12:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:06.522 
14:12:58 -- common/autotest_common.sh@887 -- # return 0 00:11:06.522 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.522 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:06.522 14:12:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:06.780 /dev/nbd5 00:11:06.781 14:12:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:06.781 14:12:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:06.781 14:12:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:06.781 14:12:58 -- common/autotest_common.sh@867 -- # local i 00:11:06.781 14:12:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:06.781 14:12:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:06.781 14:12:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:06.781 14:12:58 -- common/autotest_common.sh@871 -- # break 00:11:06.781 14:12:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:06.781 14:12:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:06.781 14:12:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:06.781 1+0 records in 00:11:06.781 1+0 records out 00:11:06.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603195 s, 6.8 MB/s 00:11:06.781 14:12:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.781 14:12:58 -- common/autotest_common.sh@884 -- # size=4096 00:11:06.781 14:12:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:06.781 14:12:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:06.781 14:12:58 -- common/autotest_common.sh@887 -- # return 0 00:11:06.781 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:06.781 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:06.781 14:12:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:07.039 /dev/nbd6 00:11:07.039 14:12:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:07.039 14:12:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:07.039 14:12:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:07.039 14:12:58 -- common/autotest_common.sh@867 -- # local i 00:11:07.039 14:12:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:07.039 14:12:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:07.039 14:12:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:07.039 14:12:58 -- common/autotest_common.sh@871 -- # break 00:11:07.039 14:12:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:07.039 14:12:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:07.039 14:12:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.039 1+0 records in 00:11:07.039 1+0 records out 00:11:07.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513091 s, 8.0 MB/s 00:11:07.039 14:12:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.039 14:12:58 -- common/autotest_common.sh@884 -- # size=4096 00:11:07.039 14:12:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.039 14:12:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:07.039 
14:12:58 -- common/autotest_common.sh@887 -- # return 0 00:11:07.039 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.039 14:12:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:07.039 14:12:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:07.301 /dev/nbd7 00:11:07.301 14:12:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:07.301 14:12:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:07.301 14:12:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:07.301 14:12:59 -- common/autotest_common.sh@867 -- # local i 00:11:07.301 14:12:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:07.301 14:12:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:07.301 14:12:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:07.301 14:12:59 -- common/autotest_common.sh@871 -- # break 00:11:07.301 14:12:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:07.301 14:12:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:07.301 14:12:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.301 1+0 records in 00:11:07.301 1+0 records out 00:11:07.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724058 s, 5.7 MB/s 00:11:07.301 14:12:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.301 14:12:59 -- common/autotest_common.sh@884 -- # size=4096 00:11:07.301 14:12:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.301 14:12:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:07.301 14:12:59 -- common/autotest_common.sh@887 -- # return 0 00:11:07.301 14:12:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.301 14:12:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:07.301 14:12:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:07.564 /dev/nbd8 00:11:07.564 14:12:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:07.564 14:12:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:07.564 14:12:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:07.564 14:12:59 -- common/autotest_common.sh@867 -- # local i 00:11:07.564 14:12:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:07.564 14:12:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:07.564 14:12:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:07.564 14:12:59 -- common/autotest_common.sh@871 -- # break 00:11:07.564 14:12:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:07.564 14:12:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:07.564 14:12:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.564 1+0 records in 00:11:07.564 1+0 records out 00:11:07.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102627 s, 4.0 MB/s 00:11:07.564 14:12:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.564 14:12:59 -- common/autotest_common.sh@884 -- # size=4096 00:11:07.564 14:12:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.564 14:12:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:07.564 
14:12:59 -- common/autotest_common.sh@887 -- # return 0 00:11:07.564 14:12:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:07.564 14:12:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:07.564 14:12:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:07.823 /dev/nbd9 00:11:07.823 14:12:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:07.823 14:12:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:07.823 14:12:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:07.823 14:12:59 -- common/autotest_common.sh@867 -- # local i 00:11:07.823 14:12:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:07.823 14:12:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:07.823 14:12:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:07.823 14:12:59 -- common/autotest_common.sh@871 -- # break 00:11:07.823 14:12:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:07.823 14:12:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:07.823 14:12:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.823 1+0 records in 00:11:07.823 1+0 records out 00:11:07.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108921 s, 3.8 MB/s 00:11:08.081 14:12:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.081 14:12:59 -- common/autotest_common.sh@884 -- # size=4096 00:11:08.081 14:12:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.081 14:12:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:08.081 14:12:59 -- common/autotest_common.sh@887 -- # return 0 00:11:08.081 14:12:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:08.081 14:12:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:08.081 14:12:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:08.081 14:12:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:08.081 14:12:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:08.340 14:13:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd0", 00:11:08.340 "bdev_name": "Malloc0" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd1", 00:11:08.340 "bdev_name": "Malloc1p0" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd10", 00:11:08.340 "bdev_name": "Malloc1p1" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd11", 00:11:08.340 "bdev_name": "Malloc2p0" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd12", 00:11:08.340 "bdev_name": "Malloc2p1" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd13", 00:11:08.340 "bdev_name": "Malloc2p2" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd14", 00:11:08.340 "bdev_name": "Malloc2p3" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd15", 00:11:08.340 "bdev_name": "Malloc2p4" 00:11:08.340 }, 00:11:08.340 { 00:11:08.340 "nbd_device": "/dev/nbd2", 00:11:08.340 "bdev_name": "Malloc2p5" 00:11:08.340 }, 00:11:08.340 { 00:11:08.341 "nbd_device": "/dev/nbd3", 00:11:08.341 "bdev_name": "Malloc2p6" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd4", 00:11:08.341 "bdev_name": "Malloc2p7" 00:11:08.341 }, 
00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd5", 00:11:08.341 "bdev_name": "TestPT" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd6", 00:11:08.341 "bdev_name": "raid0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd7", 00:11:08.341 "bdev_name": "concat0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd8", 00:11:08.341 "bdev_name": "raid1" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd9", 00:11:08.341 "bdev_name": "AIO0" 00:11:08.341 } 00:11:08.341 ]' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd0", 00:11:08.341 "bdev_name": "Malloc0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd1", 00:11:08.341 "bdev_name": "Malloc1p0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd10", 00:11:08.341 "bdev_name": "Malloc1p1" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd11", 00:11:08.341 "bdev_name": "Malloc2p0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd12", 00:11:08.341 "bdev_name": "Malloc2p1" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd13", 00:11:08.341 "bdev_name": "Malloc2p2" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd14", 00:11:08.341 "bdev_name": "Malloc2p3" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd15", 00:11:08.341 "bdev_name": "Malloc2p4" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd2", 00:11:08.341 "bdev_name": "Malloc2p5" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd3", 00:11:08.341 "bdev_name": "Malloc2p6" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd4", 00:11:08.341 "bdev_name": "Malloc2p7" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd5", 00:11:08.341 "bdev_name": "TestPT" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd6", 00:11:08.341 "bdev_name": "raid0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd7", 00:11:08.341 "bdev_name": "concat0" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd8", 00:11:08.341 "bdev_name": "raid1" 00:11:08.341 }, 00:11:08.341 { 00:11:08.341 "nbd_device": "/dev/nbd9", 00:11:08.341 "bdev_name": "AIO0" 00:11:08.341 } 00:11:08.341 ]' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:08.341 /dev/nbd1 00:11:08.341 /dev/nbd10 00:11:08.341 /dev/nbd11 00:11:08.341 /dev/nbd12 00:11:08.341 /dev/nbd13 00:11:08.341 /dev/nbd14 00:11:08.341 /dev/nbd15 00:11:08.341 /dev/nbd2 00:11:08.341 /dev/nbd3 00:11:08.341 /dev/nbd4 00:11:08.341 /dev/nbd5 00:11:08.341 /dev/nbd6 00:11:08.341 /dev/nbd7 00:11:08.341 /dev/nbd8 00:11:08.341 /dev/nbd9' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:08.341 /dev/nbd1 00:11:08.341 /dev/nbd10 00:11:08.341 /dev/nbd11 00:11:08.341 /dev/nbd12 00:11:08.341 /dev/nbd13 00:11:08.341 /dev/nbd14 00:11:08.341 /dev/nbd15 00:11:08.341 /dev/nbd2 00:11:08.341 /dev/nbd3 00:11:08.341 /dev/nbd4 00:11:08.341 /dev/nbd5 00:11:08.341 /dev/nbd6 00:11:08.341 /dev/nbd7 00:11:08.341 /dev/nbd8 00:11:08.341 /dev/nbd9' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@65 -- # count=16 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@95 -- # count=16 00:11:08.341 14:13:00 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:08.341 256+0 records in 00:11:08.341 256+0 records out 00:11:08.341 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109533 s, 95.7 MB/s 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:08.341 14:13:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:08.600 256+0 records in 00:11:08.600 256+0 records out 00:11:08.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170236 s, 6.2 MB/s 00:11:08.600 14:13:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:08.600 14:13:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:08.600 256+0 records in 00:11:08.600 256+0 records out 00:11:08.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147701 s, 7.1 MB/s 00:11:08.600 14:13:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:08.600 14:13:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:08.858 256+0 records in 00:11:08.858 256+0 records out 00:11:08.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121013 s, 8.7 MB/s 00:11:08.858 14:13:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:08.858 14:13:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:08.858 256+0 records in 00:11:08.858 256+0 records out 00:11:08.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119332 s, 8.8 MB/s 00:11:08.858 14:13:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:08.858 14:13:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:09.117 256+0 records in 00:11:09.117 256+0 records out 00:11:09.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118434 s, 8.9 MB/s 00:11:09.117 14:13:00 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.117 14:13:00 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:09.117 256+0 records in 00:11:09.117 256+0 records out 00:11:09.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119907 s, 8.7 MB/s 00:11:09.117 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.117 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:09.117 256+0 records in 00:11:09.117 256+0 records out 00:11:09.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118094 s, 8.9 MB/s 00:11:09.117 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.117 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:09.376 256+0 records in 00:11:09.376 256+0 records out 00:11:09.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11747 s, 8.9 MB/s 00:11:09.376 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.376 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:09.376 256+0 records in 00:11:09.376 256+0 records out 00:11:09.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1183 s, 8.9 MB/s 00:11:09.376 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.376 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:09.635 256+0 records in 00:11:09.635 256+0 records out 00:11:09.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117623 s, 8.9 MB/s 00:11:09.635 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.635 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:09.635 256+0 records in 00:11:09.635 256+0 records out 00:11:09.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117672 s, 8.9 MB/s 00:11:09.635 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.635 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:09.894 256+0 records in 00:11:09.894 256+0 records out 00:11:09.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117966 s, 8.9 MB/s 00:11:09.894 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.894 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:09.894 256+0 records in 00:11:09.894 256+0 records out 00:11:09.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118336 s, 8.9 MB/s 00:11:09.894 14:13:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:09.894 14:13:01 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:10.153 256+0 records in 00:11:10.153 256+0 records out 00:11:10.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121179 s, 8.7 MB/s 00:11:10.153 14:13:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:10.153 14:13:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:10.153 256+0 records in 00:11:10.153 256+0 records out 00:11:10.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121893 s, 8.6 MB/s 00:11:10.153 14:13:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:10.153 14:13:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:10.411 256+0 records in 00:11:10.412 256+0 records out 00:11:10.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216251 s, 4.8 MB/s 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 
/dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:10.412 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@51 -- # local i 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.671 14:13:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@41 -- # break 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.930 14:13:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@41 -- # break 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.189 14:13:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.447 14:13:03 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@41 -- # break 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.447 14:13:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@41 -- # break 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.722 14:13:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@41 -- # break 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.997 14:13:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:11.997 14:13:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:11.997 14:13:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:11.997 14:13:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:11.997 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.997 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.997 14:13:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@41 -- # break 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@41 -- # break 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.256 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:12.515 14:13:04 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@41 -- # break 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.515 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@41 -- # break 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.774 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@41 -- # break 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.032 14:13:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.291 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@41 
-- # break 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.549 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@41 -- # break 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.807 14:13:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:14.066 14:13:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:14.066 14:13:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:14.066 14:13:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:14.066 14:13:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.066 14:13:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.066 14:13:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@41 -- # break 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@41 -- # break 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.326 14:13:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@65 -- # true 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@65 -- # count=0 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@104 -- # count=0 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@109 -- # return 0 00:11:14.586 14:13:06 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:14.586 14:13:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:14.845 malloc_lvol_verify 00:11:14.845 14:13:06 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:15.105 fbe44000-113c-4e6a-a258-4dacfe62dc68 00:11:15.105 14:13:07 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:15.364 90d192bb-2943-469b-b25f-bc3eb9fb7d9d 00:11:15.364 14:13:07 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:15.623 /dev/nbd0 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:15.623 mke2fs 1.46.5 (30-Dec-2021) 00:11:15.623 00:11:15.623 Filesystem too small for a journal 00:11:15.623 Discarding device blocks: 0/1024 done 00:11:15.623 Creating filesystem with 1024 4k blocks and 1024 inodes 00:11:15.623 00:11:15.623 Allocating group tables: 0/1 done 00:11:15.623 Writing inode tables: 0/1 done 00:11:15.623 Writing superblocks and filesystem accounting information: 0/1 done 00:11:15.623 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@51 -- # local i 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.623 14:13:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.882 
14:13:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@41 -- # break 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@45 -- # return 0 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:15.882 14:13:07 -- bdev/nbd_common.sh@147 -- # return 0 00:11:15.882 14:13:07 -- bdev/blockdev.sh@324 -- # killprocess 119387 00:11:15.882 14:13:07 -- common/autotest_common.sh@936 -- # '[' -z 119387 ']' 00:11:15.882 14:13:07 -- common/autotest_common.sh@940 -- # kill -0 119387 00:11:15.882 14:13:07 -- common/autotest_common.sh@941 -- # uname 00:11:15.882 14:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.882 14:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119387 00:11:15.882 14:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:15.882 14:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:15.882 14:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119387' 00:11:15.882 killing process with pid 119387 00:11:15.882 14:13:07 -- common/autotest_common.sh@955 -- # kill 119387 00:11:15.882 14:13:07 -- common/autotest_common.sh@960 -- # wait 119387 00:11:16.450 ************************************ 00:11:16.450 END TEST bdev_nbd 00:11:16.450 ************************************ 00:11:16.450 14:13:08 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:16.450 00:11:16.450 real 0m22.295s 00:11:16.450 user 0m31.650s 00:11:16.450 sys 0m8.067s 00:11:16.450 14:13:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:16.450 14:13:08 -- common/autotest_common.sh@10 -- # set +x 00:11:16.450 14:13:08 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:16.450 14:13:08 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.450 14:13:08 -- common/autotest_common.sh@10 -- # set +x 00:11:16.450 ************************************ 00:11:16.450 START TEST bdev_fio 00:11:16.450 ************************************ 00:11:16.450 14:13:08 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@329 -- # local env_context 00:11:16.450 14:13:08 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:16.450 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:16.450 14:13:08 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:16.450 14:13:08 -- bdev/blockdev.sh@337 -- # echo '' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:16.450 14:13:08 -- bdev/blockdev.sh@337 -- # env_context= 00:11:16.450 14:13:08 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:16.450 14:13:08 -- common/autotest_common.sh@1270 -- # 
local workload=verify 00:11:16.450 14:13:08 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:11:16.450 14:13:08 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:16.450 14:13:08 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:16.450 14:13:08 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:16.450 14:13:08 -- common/autotest_common.sh@1290 -- # cat 00:11:16.450 14:13:08 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1303 -- # cat 00:11:16.450 14:13:08 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:11:16.450 14:13:08 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:11:16.450 14:13:08 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:16.450 14:13:08 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b 
in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.450 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:11:16.450 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:11:16.450 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.451 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:11:16.451 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:11:16.451 14:13:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:16.451 14:13:08 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:11:16.451 14:13:08 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:11:16.451 14:13:08 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:16.451 14:13:08 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:16.451 14:13:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:16.451 14:13:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.451 14:13:08 -- common/autotest_common.sh@10 -- # set +x 00:11:16.451 ************************************ 00:11:16.451 START TEST bdev_fio_rw_verify 00:11:16.451 ************************************ 00:11:16.451 14:13:08 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:16.451 14:13:08 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:16.451 14:13:08 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:16.451 14:13:08 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:16.451 14:13:08 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:16.451 14:13:08 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:16.451 14:13:08 -- common/autotest_common.sh@1330 -- # shift 00:11:16.451 14:13:08 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:16.451 14:13:08 -- common/autotest_common.sh@1333 -- # for sanitizer in 
"${sanitizers[@]}" 00:11:16.451 14:13:08 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:16.451 14:13:08 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:16.451 14:13:08 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:16.451 14:13:08 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:11:16.451 14:13:08 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:11:16.451 14:13:08 -- common/autotest_common.sh@1336 -- # break 00:11:16.451 14:13:08 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:16.451 14:13:08 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:16.710 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:16.710 fio-3.35 00:11:16.710 Starting 16 threads 00:11:28.913 00:11:28.913 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=120521: Mon Nov 18 14:13:19 2024 00:11:28.913 read: IOPS=74.2k, BW=290MiB/s (304MB/s)(2898MiB/10006msec) 00:11:28.913 slat (nsec): min=1955, max=40043k, avg=37352.11, stdev=435994.62 00:11:28.913 clat (usec): min=9, max=45559, avg=304.15, stdev=1298.28 00:11:28.913 
lat (usec): min=26, max=45573, avg=341.51, stdev=1368.82 00:11:28.913 clat percentiles (usec): 00:11:28.913 | 50.000th=[ 182], 99.000th=[ 840], 99.900th=[16319], 99.990th=[28443], 00:11:28.913 | 99.999th=[43779] 00:11:28.913 write: IOPS=118k, BW=462MiB/s (484MB/s)(4559MiB/9870msec); 0 zone resets 00:11:28.913 slat (usec): min=9, max=64023, avg=68.79, stdev=679.60 00:11:28.913 clat (usec): min=9, max=64282, avg=399.49, stdev=1590.02 00:11:28.913 lat (usec): min=36, max=64320, avg=468.28, stdev=1729.83 00:11:28.913 clat percentiles (usec): 00:11:28.913 | 50.000th=[ 229], 99.000th=[ 6194], 99.900th=[21365], 99.990th=[40109], 00:11:28.913 | 99.999th=[50594] 00:11:28.913 bw ( KiB/s): min=288504, max=728638, per=98.30%, avg=464949.47, stdev=8525.29, samples=304 00:11:28.913 iops : min=72126, max=182159, avg=116237.26, stdev=2131.32, samples=304 00:11:28.913 lat (usec) : 10=0.01%, 20=0.01%, 50=0.66%, 100=9.68%, 250=54.58% 00:11:28.913 lat (usec) : 500=31.41%, 750=1.90%, 1000=0.39% 00:11:28.913 lat (msec) : 2=0.24%, 4=0.12%, 10=0.32%, 20=0.60%, 50=0.10% 00:11:28.913 lat (msec) : 100=0.01% 00:11:28.913 cpu : usr=56.17%, sys=1.93%, ctx=227165, majf=2, minf=91128 00:11:28.913 IO depths : 1=11.4%, 2=23.6%, 4=51.9%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.913 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.913 issued rwts: total=741983,1167119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.913 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:28.913 00:11:28.913 Run status group 0 (all jobs): 00:11:28.913 READ: bw=290MiB/s (304MB/s), 290MiB/s-290MiB/s (304MB/s-304MB/s), io=2898MiB (3039MB), run=10006-10006msec 00:11:28.913 WRITE: bw=462MiB/s (484MB/s), 462MiB/s-462MiB/s (484MB/s-484MB/s), io=4559MiB (4781MB), run=9870-9870msec 00:11:28.913 ----------------------------------------------------- 00:11:28.913 Suppressions used: 00:11:28.913 count bytes template 00:11:28.913 16 140 /usr/src/fio/parse.c 00:11:28.913 9958 955968 /usr/src/fio/iolog.c 00:11:28.913 1 904 libcrypto.so 00:11:28.913 ----------------------------------------------------- 00:11:28.913 00:11:28.913 ************************************ 00:11:28.913 END TEST bdev_fio_rw_verify 00:11:28.913 ************************************ 00:11:28.913 00:11:28.913 real 0m11.769s 00:11:28.913 user 1m32.688s 00:11:28.913 sys 0m3.830s 00:11:28.913 14:13:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:28.913 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.913 14:13:20 -- bdev/blockdev.sh@348 -- # rm -f 00:11:28.913 14:13:20 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:28.913 14:13:20 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:28.913 14:13:20 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:28.913 14:13:20 -- common/autotest_common.sh@1270 -- # local workload=trim 00:11:28.913 14:13:20 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:11:28.913 14:13:20 -- common/autotest_common.sh@1272 -- # local env_context= 00:11:28.913 14:13:20 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:11:28.913 14:13:20 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:28.913 14:13:20 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:11:28.913 14:13:20 -- 
common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:11:28.913 14:13:20 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:28.913 14:13:20 -- common/autotest_common.sh@1290 -- # cat 00:11:28.913 14:13:20 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:11:28.913 14:13:20 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:11:28.913 14:13:20 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:11:28.913 14:13:20 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:28.914 14:13:20 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d0881076-e730-4ab5-86ef-f6736d72fafa"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d0881076-e730-4ab5-86ef-f6736d72fafa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4a949022-0a75-58bb-8e43-7acb6ddf8d92"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4a949022-0a75-58bb-8e43-7acb6ddf8d92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "468eb969-8265-5dbf-8b9d-2337f0c9e7ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "468eb969-8265-5dbf-8b9d-2337f0c9e7ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "fa75de9e-ea58-5364-92c5-27f680682831"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fa75de9e-ea58-5364-92c5-27f680682831",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b154af71-a5f7-5f88-9e23-9b05e28fcda8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b154af71-a5f7-5f88-9e23-9b05e28fcda8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8589296d-933a-56cb-902c-cb4e09e7ee35"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8589296d-933a-56cb-902c-cb4e09e7ee35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1722a258-8146-5b5e-b201-240c8e2389e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1722a258-8146-5b5e-b201-240c8e2389e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7438e5f6-e854-59f1-aa41-7089065714a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7438e5f6-e854-59f1-aa41-7089065714a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "06b8c853-5d7e-584a-9469-ad74781f7370"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06b8c853-5d7e-584a-9469-ad74781f7370",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d909c51f-87ed-5d36-bad5-d7fb007de6ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d909c51f-87ed-5d36-bad5-d7fb007de6ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a54465c1-dbfa-52b1-b5d9-37992ed826eb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a54465c1-dbfa-52b1-b5d9-37992ed826eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "013e915e-2070-5329-b1e3-369a1b348570"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "013e915e-2070-5329-b1e3-369a1b348570",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "910a0f46-0fef-4fad-a292-ba8234261b8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "910a0f46-0fef-4fad-a292-ba8234261b8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "910a0f46-0fef-4fad-a292-ba8234261b8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ec7e492c-9e34-43dc-aea3-305d2c25faa6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "29ba7dfd-5684-4327-a5f5-a332b0eeb681",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "24114fa6-66f3-4845-8ee8-8f9422d31325"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "24114fa6-66f3-4845-8ee8-8f9422d31325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "24114fa6-66f3-4845-8ee8-8f9422d31325",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "20e014b1-3f5d-4e83-9dbc-d6e69d1ac4ee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "596a0ad9-4551-47de-95cb-2b6fc97d0d40",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "56ab7422-dcd0-414c-b5ff-f2adabe783f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56ab7422-dcd0-414c-b5ff-f2adabe783f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "56ab7422-dcd0-414c-b5ff-f2adabe783f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "8370a061-2a43-4ff0-983d-502715fbb232",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "caeb10f2-192a-4a4b-9379-80d7b332f90f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a9194642-68cf-4257-bac1-bd188dd963b4"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a9194642-68cf-4257-bac1-bd188dd963b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:28.914 14:13:20 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:11:28.914 Malloc1p0 00:11:28.914 Malloc1p1 00:11:28.914 Malloc2p0 00:11:28.914 Malloc2p1 00:11:28.914 Malloc2p2 00:11:28.914 Malloc2p3 00:11:28.914 Malloc2p4 00:11:28.914 Malloc2p5 00:11:28.914 Malloc2p6 00:11:28.914 Malloc2p7 00:11:28.914 TestPT 00:11:28.914 raid0 00:11:28.914 concat0 ]] 00:11:28.914 14:13:20 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:28.915 14:13:20 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d0881076-e730-4ab5-86ef-f6736d72fafa"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d0881076-e730-4ab5-86ef-f6736d72fafa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4a949022-0a75-58bb-8e43-7acb6ddf8d92"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4a949022-0a75-58bb-8e43-7acb6ddf8d92",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "468eb969-8265-5dbf-8b9d-2337f0c9e7ce"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "468eb969-8265-5dbf-8b9d-2337f0c9e7ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "fa75de9e-ea58-5364-92c5-27f680682831"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fa75de9e-ea58-5364-92c5-27f680682831",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b154af71-a5f7-5f88-9e23-9b05e28fcda8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b154af71-a5f7-5f88-9e23-9b05e28fcda8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8589296d-933a-56cb-902c-cb4e09e7ee35"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8589296d-933a-56cb-902c-cb4e09e7ee35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1722a258-8146-5b5e-b201-240c8e2389e2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1722a258-8146-5b5e-b201-240c8e2389e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7438e5f6-e854-59f1-aa41-7089065714a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"7438e5f6-e854-59f1-aa41-7089065714a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "06b8c853-5d7e-584a-9469-ad74781f7370"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06b8c853-5d7e-584a-9469-ad74781f7370",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d909c51f-87ed-5d36-bad5-d7fb007de6ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d909c51f-87ed-5d36-bad5-d7fb007de6ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a54465c1-dbfa-52b1-b5d9-37992ed826eb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a54465c1-dbfa-52b1-b5d9-37992ed826eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "013e915e-2070-5329-b1e3-369a1b348570"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "013e915e-2070-5329-b1e3-369a1b348570",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "910a0f46-0fef-4fad-a292-ba8234261b8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "910a0f46-0fef-4fad-a292-ba8234261b8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "910a0f46-0fef-4fad-a292-ba8234261b8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "ec7e492c-9e34-43dc-aea3-305d2c25faa6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "29ba7dfd-5684-4327-a5f5-a332b0eeb681",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "24114fa6-66f3-4845-8ee8-8f9422d31325"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "24114fa6-66f3-4845-8ee8-8f9422d31325",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "24114fa6-66f3-4845-8ee8-8f9422d31325",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "20e014b1-3f5d-4e83-9dbc-d6e69d1ac4ee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "596a0ad9-4551-47de-95cb-2b6fc97d0d40",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "56ab7422-dcd0-414c-b5ff-f2adabe783f4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "56ab7422-dcd0-414c-b5ff-f2adabe783f4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "56ab7422-dcd0-414c-b5ff-f2adabe783f4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "8370a061-2a43-4ff0-983d-502715fbb232",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "caeb10f2-192a-4a4b-9379-80d7b332f90f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "a9194642-68cf-4257-bac1-bd188dd963b4"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "a9194642-68cf-4257-bac1-bd188dd963b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:28.915 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.915 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:11:28.915 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:11:28.915 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.915 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:11:28.915 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:11:28.915 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.915 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:11:28.915 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:11:28.915 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.915 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:11:28.915 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:11:28.916 14:13:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:28.916 14:13:20 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:11:28.916 14:13:20 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:11:28.916 14:13:20 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:28.916 14:13:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:28.916 14:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.916 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.916 ************************************ 00:11:28.916 START TEST bdev_fio_trim 00:11:28.916 ************************************ 00:11:28.916 14:13:20 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:28.916 14:13:20 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:28.916 14:13:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:28.916 14:13:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:28.916 14:13:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:28.916 14:13:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:28.916 14:13:20 -- common/autotest_common.sh@1330 -- # shift 00:11:28.916 14:13:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:28.916 14:13:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:28.916 14:13:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:28.916 14:13:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:28.916 14:13:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:28.916 14:13:20 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:11:28.916 14:13:20 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:11:28.916 14:13:20 -- common/autotest_common.sh@1336 -- # break 00:11:28.916 14:13:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:28.916 14:13:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:28.916 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:28.916 fio-3.35 00:11:28.916 Starting 14 threads 00:11:41.120 00:11:41.120 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=120719: Mon Nov 18 14:13:31 2024 00:11:41.120 write: IOPS=152k, BW=595MiB/s (624MB/s)(5957MiB/10009msec); 0 zone resets 00:11:41.120 slat (usec): min=2, max=40049, avg=33.10, stdev=385.77 00:11:41.120 clat (usec): min=10, max=40279, avg=234.31, stdev=1034.61 00:11:41.120 lat (usec): min=27, max=40296, avg=267.41, stdev=1103.52 00:11:41.120 clat percentiles (usec): 00:11:41.120 | 50.000th=[ 155], 99.000th=[ 469], 99.900th=[16188], 99.990th=[20317], 00:11:41.120 | 99.999th=[28181] 00:11:41.120 bw ( KiB/s): min=431892, max=861552, per=100.00%, avg=609739.37, stdev=10402.23, samples=266 00:11:41.120 iops : min=107973, max=215388, avg=152434.79, stdev=2600.56, samples=266 00:11:41.120 trim: IOPS=152k, BW=595MiB/s (624MB/s)(5957MiB/10009msec); 0 zone resets 00:11:41.120 slat (usec): min=4, max=28038, avg=22.84, stdev=321.79 00:11:41.120 clat (usec): min=4, max=40296, avg=248.35, stdev=1052.83 00:11:41.120 lat (usec): min=13, max=40313, avg=271.19, stdev=1100.64 00:11:41.120 clat percentiles (usec): 00:11:41.120 | 50.000th=[ 172], 99.000th=[ 441], 99.900th=[16188], 99.990th=[20317], 00:11:41.120 | 99.999th=[28181] 00:11:41.120 bw ( KiB/s): min=431892, max=861560, per=100.00%, avg=609739.37, stdev=10401.59, samples=266 00:11:41.120 iops : min=107973, max=215390, avg=152434.79, stdev=2600.40, samples=266 00:11:41.120 lat (usec) : 10=0.09%, 20=0.31%, 50=1.16%, 100=12.34%, 250=75.99% 00:11:41.120 lat (usec) : 500=9.51%, 750=0.08%, 1000=0.01% 00:11:41.120 lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%, 20=0.42%, 50=0.02% 00:11:41.120 cpu : usr=69.16%, sys=0.22%, ctx=166959, majf=0, minf=9057 00:11:41.120 IO depths : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:41.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.120 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:41.120 issued rwts: total=0,1524865,1524871,0 short=0,0,0,0 dropped=0,0,0,0 00:11:41.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:41.120 00:11:41.120 Run status group 0 (all jobs): 00:11:41.120 WRITE: bw=595MiB/s (624MB/s), 595MiB/s-595MiB/s (624MB/s-624MB/s), io=5957MiB (6246MB), run=10009-10009msec 00:11:41.120 TRIM: bw=595MiB/s (624MB/s), 595MiB/s-595MiB/s (624MB/s-624MB/s), io=5957MiB (6246MB), run=10009-10009msec 00:11:41.120 ----------------------------------------------------- 00:11:41.120 Suppressions used: 00:11:41.120 count bytes template 00:11:41.120 14 129 /usr/src/fio/parse.c 00:11:41.120 1 904 libcrypto.so 00:11:41.120 ----------------------------------------------------- 00:11:41.120 00:11:41.120 00:11:41.121 real 0m11.700s 00:11:41.121 user 1m39.842s 00:11:41.121 sys 0m0.978s 00:11:41.121 ************************************ 00:11:41.121 END TEST bdev_fio_trim 00:11:41.121 ************************************ 00:11:41.121 14:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.121 14:13:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.121 14:13:32 -- 
bdev/blockdev.sh@366 -- # rm -f 00:11:41.121 14:13:32 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.121 14:13:32 -- bdev/blockdev.sh@368 -- # popd 00:11:41.121 /home/vagrant/spdk_repo/spdk 00:11:41.121 14:13:32 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:11:41.121 00:11:41.121 real 0m23.779s 00:11:41.121 user 3m12.722s 00:11:41.121 sys 0m4.899s 00:11:41.121 14:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.121 14:13:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.121 ************************************ 00:11:41.121 END TEST bdev_fio 00:11:41.121 ************************************ 00:11:41.121 14:13:32 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:41.121 14:13:32 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:41.121 14:13:32 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:11:41.121 14:13:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.121 14:13:32 -- common/autotest_common.sh@10 -- # set +x 00:11:41.121 ************************************ 00:11:41.121 START TEST bdev_verify 00:11:41.121 ************************************ 00:11:41.121 14:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:41.121 [2024-11-18 14:13:32.226292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:41.121 [2024-11-18 14:13:32.226670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120893 ] 00:11:41.121 [2024-11-18 14:13:32.366744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.121 [2024-11-18 14:13:32.438399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.121 [2024-11-18 14:13:32.438418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.121 [2024-11-18 14:13:32.607920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:41.121 [2024-11-18 14:13:32.608406] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:41.121 [2024-11-18 14:13:32.615819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:41.121 [2024-11-18 14:13:32.616049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:41.121 [2024-11-18 14:13:32.623921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:41.121 [2024-11-18 14:13:32.624154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:41.121 [2024-11-18 14:13:32.624311] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:41.121 [2024-11-18 14:13:32.734709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:41.121 [2024-11-18 14:13:32.735200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.121 [2024-11-18 14:13:32.735394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:11:41.121 [2024-11-18 14:13:32.735573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.121 [2024-11-18 14:13:32.738794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.121 [2024-11-18 14:13:32.738982] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:41.121 Running I/O for 5 seconds... 00:11:46.390 00:11:46.390 Latency(us) 00:11:46.390 [2024-11-18T14:13:38.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.390 [2024-11-18T14:13:38.464Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.390 Verification LBA range: start 0x0 length 0x1000 00:11:46.390 Malloc0 : 5.19 1398.38 5.46 0.00 0.00 90574.84 1891.61 188743.68 00:11:46.390 [2024-11-18T14:13:38.464Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.390 Verification LBA range: start 0x1000 length 0x1000 00:11:46.390 Malloc0 : 5.23 1213.44 4.74 0.00 0.00 104812.08 2591.65 282162.27 00:11:46.390 [2024-11-18T14:13:38.464Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.390 Verification LBA range: start 0x0 length 0x800 00:11:46.390 Malloc1p0 : 5.19 962.15 3.76 0.00 0.00 131359.68 4974.78 176351.42 00:11:46.390 [2024-11-18T14:13:38.464Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.390 Verification LBA range: start 0x800 length 0x800 00:11:46.390 Malloc1p0 : 5.23 856.01 3.34 0.00 0.00 148377.27 4825.83 170631.91 00:11:46.390 [2024-11-18T14:13:38.464Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.390 Verification LBA range: start 0x0 length 0x800 00:11:46.391 Malloc1p1 : 5.19 961.96 3.76 0.00 0.00 131121.38 5064.15 171585.16 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x800 length 0x800 00:11:46.391 Malloc1p1 : 5.23 855.84 3.34 0.00 0.00 148142.20 4796.04 165865.66 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p0 : 5.19 961.79 3.76 0.00 0.00 130907.84 4915.20 166818.91 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p0 : 5.23 855.65 3.34 0.00 0.00 147938.31 4498.15 162052.65 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p1 : 5.19 961.60 3.76 0.00 0.00 130640.97 5034.36 162052.65 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p1 : 5.24 855.49 3.34 0.00 0.00 147704.86 4527.94 157286.40 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p2 : 5.19 961.41 3.76 0.00 0.00 130436.57 4647.10 158239.65 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 
length 0x200 00:11:46.391 Malloc2p2 : 5.24 855.31 3.34 0.00 0.00 147494.98 4617.31 153473.40 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p3 : 5.22 973.77 3.80 0.00 0.00 129285.34 4885.41 154426.65 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p3 : 5.24 854.27 3.34 0.00 0.00 147315.81 4498.15 149660.39 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p4 : 5.22 973.51 3.80 0.00 0.00 129100.68 4736.47 149660.39 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p4 : 5.24 854.08 3.34 0.00 0.00 147108.11 4587.52 145847.39 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p5 : 5.22 973.27 3.80 0.00 0.00 128897.38 4676.89 145847.39 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p5 : 5.25 853.91 3.34 0.00 0.00 146899.43 4408.79 142034.39 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p6 : 5.22 973.02 3.80 0.00 0.00 128670.21 4825.83 142034.39 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p6 : 5.25 853.71 3.33 0.00 0.00 146649.31 4647.10 138221.38 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x200 00:11:46.391 Malloc2p7 : 5.22 972.77 3.80 0.00 0.00 128498.14 4110.89 139174.63 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x200 length 0x200 00:11:46.391 Malloc2p7 : 5.25 853.54 3.33 0.00 0.00 146426.18 4438.57 135361.63 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x1000 00:11:46.391 TestPT : 5.22 972.52 3.80 0.00 0.00 128250.78 2815.07 127735.62 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x1000 length 0x1000 00:11:46.391 TestPT : 5.25 823.08 3.22 0.00 0.00 151544.00 8043.05 240219.23 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x2000 00:11:46.391 raid0 : 5.22 972.27 3.80 0.00 0.00 127921.99 4974.78 123922.62 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification 
LBA range: start 0x2000 length 0x2000 00:11:46.391 raid0 : 5.25 853.21 3.33 0.00 0.00 145851.91 4319.42 118203.11 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x2000 00:11:46.391 concat0 : 5.23 972.02 3.80 0.00 0.00 127713.44 4915.20 119156.36 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x2000 length 0x2000 00:11:46.391 concat0 : 5.25 853.02 3.33 0.00 0.00 145597.79 4825.83 115343.36 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x1000 00:11:46.391 raid1 : 5.23 971.77 3.80 0.00 0.00 127516.07 3798.11 113436.86 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x1000 length 0x1000 00:11:46.391 raid1 : 5.25 852.86 3.33 0.00 0.00 145319.12 5481.19 112483.61 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x0 length 0x4e2 00:11:46.391 AIO0 : 5.23 970.98 3.79 0.00 0.00 127311.22 12094.37 106287.48 00:11:46.391 [2024-11-18T14:13:38.465Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.391 Verification LBA range: start 0x4e2 length 0x4e2 00:11:46.391 AIO0 : 5.25 852.50 3.33 0.00 0.00 144936.10 14000.87 112960.23 00:11:46.391 [2024-11-18T14:13:38.465Z] =================================================================================================================== 00:11:46.391 [2024-11-18T14:13:38.465Z] Total : 29929.11 116.91 0.00 0.00 134093.65 1891.61 282162.27 00:11:46.958 ************************************ 00:11:46.958 END TEST bdev_verify 00:11:46.958 ************************************ 00:11:46.958 00:11:46.958 real 0m6.631s 00:11:46.958 user 0m11.479s 00:11:46.958 sys 0m0.683s 00:11:46.958 14:13:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.958 14:13:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.958 14:13:38 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:46.958 14:13:38 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:11:46.958 14:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.958 14:13:38 -- common/autotest_common.sh@10 -- # set +x 00:11:46.958 ************************************ 00:11:46.958 START TEST bdev_verify_big_io 00:11:46.958 ************************************ 00:11:46.958 14:13:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:46.958 [2024-11-18 14:13:38.895871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
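The verify stage that just completed and the big-I/O stage starting here both drive the same standalone bdevperf example application against the JSON bdev layout generated earlier in the run. Below is a minimal sketch of an equivalent manual invocation, using only the paths and flag values visible in the run_test command line above; the -C flag and the trailing '' argument are carried over verbatim from the log rather than interpreted here.

  SPDK=/home/vagrant/spdk_repo/spdk
  # -q queue depth, -o I/O size in bytes (64 KiB for this big-io stage),
  # -w workload, -t runtime in seconds, -m core mask (0x3 = the two
  # reactors reported as "started on core 0/1" below)
  args=(--json "$SPDK/test/bdev/bdev.json" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '')
  "$SPDK/build/examples/bdevperf" "${args[@]}"

The verify workload writes a pattern and reads it back for comparison, which is why the per-bdev result tables in this log report a "Verification LBA range" rather than plain throughput.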
00:11:46.958 [2024-11-18 14:13:38.896274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121000 ] 00:11:47.216 [2024-11-18 14:13:39.038976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:47.216 [2024-11-18 14:13:39.107724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.216 [2024-11-18 14:13:39.107747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.216 [2024-11-18 14:13:39.276626] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:47.216 [2024-11-18 14:13:39.277113] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:47.216 [2024-11-18 14:13:39.284545] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:47.216 [2024-11-18 14:13:39.284759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:47.474 [2024-11-18 14:13:39.292638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:47.474 [2024-11-18 14:13:39.292866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:47.474 [2024-11-18 14:13:39.293053] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:47.474 [2024-11-18 14:13:39.399689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:47.474 [2024-11-18 14:13:39.400164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.474 [2024-11-18 14:13:39.400419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.474 [2024-11-18 14:13:39.400583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.474 [2024-11-18 14:13:39.403601] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.474 [2024-11-18 14:13:39.403787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:47.733 [2024-11-18 14:13:39.612718] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.614283] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.616405] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.618476] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.619877] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.621947] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.623332] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:47.733 [2024-11-18 14:13:39.625375] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.626764] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.628851] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.630204] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.632313] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.633738] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.635821] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.637939] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.639331] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:11:47.734 [2024-11-18 14:13:39.673274] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:47.734 [2024-11-18 14:13:39.676394] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:11:47.734 Running I/O for 5 seconds... 00:11:54.312 00:11:54.312 Latency(us) 00:11:54.312 [2024-11-18T14:13:46.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x100 00:11:54.312 Malloc0 : 5.42 534.57 33.41 0.00 0.00 235685.92 14656.23 922746.88 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x100 length 0x100 00:11:54.312 Malloc0 : 5.44 416.18 26.01 0.00 0.00 299283.09 21328.99 991380.95 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x80 00:11:54.312 Malloc1p0 : 5.42 408.93 25.56 0.00 0.00 306230.59 26810.18 482344.96 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x80 length 0x80 00:11:54.312 Malloc1p0 : 5.73 139.03 8.69 0.00 0.00 875038.53 43134.60 1784485.70 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x80 00:11:54.312 Malloc1p1 : 5.55 177.79 11.11 0.00 0.00 691499.33 25022.84 1479445.41 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x80 length 0x80 00:11:54.312 Malloc1p1 : 5.74 144.55 9.03 0.00 0.00 837699.42 40036.54 1784485.70 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p0 : 5.42 103.26 6.45 0.00 0.00 296886.40 4647.10 428962.91 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p0 : 5.55 81.28 5.08 0.00 0.00 369252.13 5957.82 537633.51 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p1 : 5.42 103.24 6.45 0.00 0.00 296115.40 4468.36 419430.40 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p1 : 5.55 81.26 5.08 0.00 0.00 368004.32 6523.81 526194.50 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p2 : 5.43 103.21 
6.45 0.00 0.00 295334.15 5004.57 411804.39 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p2 : 5.55 81.24 5.08 0.00 0.00 366534.22 6851.49 510942.49 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p3 : 5.43 103.20 6.45 0.00 0.00 294559.90 4944.99 402271.88 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p3 : 5.56 81.18 5.07 0.00 0.00 365100.18 6583.39 499503.48 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p4 : 5.43 103.17 6.45 0.00 0.00 293758.87 4885.41 392739.37 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p4 : 5.56 81.16 5.07 0.00 0.00 363654.69 6613.18 486157.96 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p5 : 5.43 103.16 6.45 0.00 0.00 292948.82 4855.62 385113.37 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p5 : 5.61 84.50 5.28 0.00 0.00 351300.68 7536.64 470905.95 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p6 : 5.43 103.14 6.45 0.00 0.00 292145.74 4617.31 375580.86 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p6 : 5.61 84.49 5.28 0.00 0.00 349801.86 7357.91 453747.43 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x20 00:11:54.312 Malloc2p7 : 5.43 103.12 6.44 0.00 0.00 291371.94 4736.47 366048.35 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x20 length 0x20 00:11:54.312 Malloc2p7 : 5.61 84.47 5.28 0.00 0.00 348371.73 6583.39 440401.92 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x100 00:11:54.312 TestPT : 5.56 178.77 11.17 0.00 0.00 668246.51 34793.66 1487071.42 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x100 length 0x100 00:11:54.312 TestPT : 5.70 145.36 9.09 0.00 0.00 800419.72 50998.92 1784485.70 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.312 Verification LBA range: start 0x0 length 0x200 00:11:54.312 raid0 : 
5.56 182.88 11.43 0.00 0.00 648640.26 25261.15 1487071.42 00:11:54.312 [2024-11-18T14:13:46.386Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x200 length 0x200 00:11:54.313 raid0 : 5.70 150.42 9.40 0.00 0.00 763680.63 32887.16 1761607.68 00:11:54.313 [2024-11-18T14:13:46.387Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x0 length 0x200 00:11:54.313 concat0 : 5.56 187.83 11.74 0.00 0.00 626064.13 24903.68 1494697.43 00:11:54.313 [2024-11-18T14:13:46.387Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x200 length 0x200 00:11:54.313 concat0 : 5.67 236.82 14.80 0.00 0.00 481931.42 30384.87 1532827.46 00:11:54.313 [2024-11-18T14:13:46.387Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x0 length 0x100 00:11:54.313 raid1 : 5.60 192.10 12.01 0.00 0.00 602881.93 20971.52 1494697.43 00:11:54.313 [2024-11-18T14:13:46.387Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x100 length 0x100 00:11:54.313 raid1 : 5.73 193.75 12.11 0.00 0.00 583572.75 20018.27 1776859.69 00:11:54.313 [2024-11-18T14:13:46.387Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x0 length 0x4e 00:11:54.313 AIO0 : 5.56 193.01 12.06 0.00 0.00 364846.32 7685.59 899868.86 00:11:54.313 [2024-11-18T14:13:46.387Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:11:54.313 Verification LBA range: start 0x4e length 0x4e 00:11:54.313 AIO0 : 5.73 167.42 10.46 0.00 0.00 406558.31 2978.91 1006632.96 00:11:54.313 [2024-11-18T14:13:46.387Z] =================================================================================================================== 00:11:54.313 [2024-11-18T14:13:46.387Z] Total : 5134.49 320.91 0.00 0.00 449070.68 2978.91 1784485.70 00:11:54.313 ************************************ 00:11:54.313 END TEST bdev_verify_big_io 00:11:54.313 ************************************ 00:11:54.313 00:11:54.313 real 0m7.177s 00:11:54.313 user 0m13.012s 00:11:54.313 sys 0m0.553s 00:11:54.313 14:13:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.313 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:11:54.313 14:13:46 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:54.313 14:13:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:54.313 14:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.313 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:11:54.313 ************************************ 00:11:54.313 START TEST bdev_write_zeroes 00:11:54.313 ************************************ 00:11:54.313 14:13:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:54.313 [2024-11-18 14:13:46.137312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
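Every stage in this log is wrapped by the autotest run_test helper, which prints the asterisk-framed START TEST/END TEST banners and times the wrapped command; the real/user/sys lines after each stage are ordinary time(1) output. The following is a simplified sketch of that behavior, assuming the general shape of the helper rather than quoting common/autotest_common.sh:

  run_test() {
      # Frame the wrapped command with banners and time it.
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

The real helper also manages the xtrace state (the xtrace_disable and set +x traces visible throughout); those details are omitted from this sketch.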
00:11:54.313 [2024-11-18 14:13:46.137745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121106 ] 00:11:54.313 [2024-11-18 14:13:46.283000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.313 [2024-11-18 14:13:46.348502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.574 [2024-11-18 14:13:46.516630] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.574 [2024-11-18 14:13:46.517007] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.574 [2024-11-18 14:13:46.524546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.574 [2024-11-18 14:13:46.524753] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.574 [2024-11-18 14:13:46.532599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.574 [2024-11-18 14:13:46.532792] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:54.574 [2024-11-18 14:13:46.532968] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:54.574 [2024-11-18 14:13:46.636089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.574 [2024-11-18 14:13:46.636438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.574 [2024-11-18 14:13:46.636539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:54.574 [2024-11-18 14:13:46.636713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.574 [2024-11-18 14:13:46.639404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.574 [2024-11-18 14:13:46.639576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:54.831 Running I/O for 1 seconds... 
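The TestPT messages just above come from the passthru virtual bdev claiming Malloc3. In this run the stack is built from the JSON config file, but the same topology can be created over the RPC socket; here is a sketch using sizes matching the TestPT entry in the earlier bdev dump (65536 blocks x 512 B = 32 MiB), with the rpc.py flags assumed from the SPDK passthru module's documented interface:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc3 32 512        # 32 MiB base bdev, 512 B blocks
  "$SPDK/scripts/rpc.py" bdev_passthru_create -b Malloc3 -p TestPT   # register TestPT on top of Malloc3

Teardown goes in the opposite order: bdev_passthru_delete TestPT first, then bdev_malloc_delete Malloc3.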
00:11:56.208 00:11:56.208 Latency(us) 00:11:56.208 [2024-11-18T14:13:48.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc0 : 1.03 6433.12 25.13 0.00 0.00 19880.03 659.08 36223.53 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc1p0 : 1.04 6426.21 25.10 0.00 0.00 19866.93 875.05 35270.28 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc1p1 : 1.04 6419.69 25.08 0.00 0.00 19854.83 867.61 34555.35 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p0 : 1.04 6413.24 25.05 0.00 0.00 19838.82 871.33 33602.09 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p1 : 1.04 6406.83 25.03 0.00 0.00 19824.64 882.50 32887.16 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p2 : 1.04 6400.31 25.00 0.00 0.00 19810.79 875.05 31933.91 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p3 : 1.04 6393.94 24.98 0.00 0.00 19789.51 878.78 30980.65 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p4 : 1.04 6386.60 24.95 0.00 0.00 19775.69 878.78 30146.56 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p5 : 1.04 6380.25 24.92 0.00 0.00 19757.55 882.50 29193.31 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p6 : 1.04 6373.82 24.90 0.00 0.00 19740.35 875.05 28359.21 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 Malloc2p7 : 1.05 6367.51 24.87 0.00 0.00 19721.76 871.33 27525.12 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 TestPT : 1.05 6361.09 24.85 0.00 0.00 19704.99 904.84 26691.03 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 raid0 : 1.05 6353.74 24.82 0.00 0.00 19679.39 1429.88 25141.99 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 concat0 : 1.05 6346.54 24.79 0.00 0.00 19644.02 1429.88 23712.12 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 raid1 : 1.05 6337.24 24.75 0.00 0.00 19600.58 2204.39 21567.30 00:11:56.208 [2024-11-18T14:13:48.282Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:56.208 AIO0 : 1.05 6427.05 25.11 0.00 0.00 19251.11 448.70 20971.52 00:11:56.208 [2024-11-18T14:13:48.282Z] =================================================================================================================== 00:11:56.208 [2024-11-18T14:13:48.282Z] Total : 102227.17 399.32 0.00 
0.00 19733.30 448.70 36223.53 00:11:56.469 ************************************ 00:11:56.469 END TEST bdev_write_zeroes 00:11:56.469 ************************************ 00:11:56.469 00:11:56.469 real 0m2.365s 00:11:56.469 user 0m1.794s 00:11:56.469 sys 0m0.385s 00:11:56.469 14:13:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.469 14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:11:56.469 14:13:48 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.469 14:13:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:56.469 14:13:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.469 14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:11:56.469 ************************************ 00:11:56.469 START TEST bdev_json_nonenclosed 00:11:56.469 ************************************ 00:11:56.469 14:13:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.728 [2024-11-18 14:13:48.566091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:56.728 [2024-11-18 14:13:48.566560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121158 ] 00:11:56.729 [2024-11-18 14:13:48.716041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.729 [2024-11-18 14:13:48.784675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.729 [2024-11-18 14:13:48.785247] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:56.729 [2024-11-18 14:13:48.785445] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:56.987 ************************************ 00:11:56.987 END TEST bdev_json_nonenclosed 00:11:56.987 ************************************ 00:11:56.987 00:11:56.987 real 0m0.391s 00:11:56.987 user 0m0.166s 00:11:56.987 sys 0m0.124s 00:11:56.987 14:13:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:56.987 14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:11:56.987 14:13:48 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.987 14:13:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:56.987 14:13:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:56.987 14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:11:56.987 ************************************ 00:11:56.987 START TEST bdev_json_nonarray 00:11:56.987 ************************************ 00:11:56.987 14:13:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.987 [2024-11-18 14:13:49.016183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
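bdev_json_nonenclosed above and bdev_json_nonarray starting here are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only if initialization fails with the expected error ("not enclosed in {}" above, and a complaint that 'subsystems' should be an array for this stage). The shapes below are inferred from those error messages; the actual nonenclosed.json and nonarray.json in the repo may differ in detail:

  # nonenclosed: the "subsystems" key is not wrapped in a top-level object.
  cat > nonenclosed.json <<'EOF'
  "subsystems": [ { "subsystem": "bdev", "config": [] } ]
  EOF
  # -> Invalid JSON configuration: not enclosed in {}.

  # nonarray: "subsystems" is an object where an array is required.
  cat > nonarray.json <<'EOF'
  { "subsystems": { "subsystem": "bdev", "config": [] } }
  EOF
  # -> Invalid JSON configuration: 'subsystems' should be an array.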
00:11:56.987 [2024-11-18 14:13:49.016654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121196 ] 00:11:57.246 [2024-11-18 14:13:49.160298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.246 [2024-11-18 14:13:49.225639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.246 [2024-11-18 14:13:49.226138] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:11:57.246 [2024-11-18 14:13:49.226275] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:57.506 ************************************ 00:11:57.506 END TEST bdev_json_nonarray 00:11:57.506 ************************************ 00:11:57.506 00:11:57.506 real 0m0.382s 00:11:57.506 user 0m0.187s 00:11:57.506 sys 0m0.094s 00:11:57.506 14:13:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:57.506 14:13:49 -- common/autotest_common.sh@10 -- # set +x 00:11:57.506 14:13:49 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:11:57.506 14:13:49 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:11:57.506 14:13:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:57.506 14:13:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.506 14:13:49 -- common/autotest_common.sh@10 -- # set +x 00:11:57.506 ************************************ 00:11:57.506 START TEST bdev_qos 00:11:57.506 ************************************ 00:11:57.506 14:13:49 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:11:57.506 14:13:49 -- bdev/blockdev.sh@444 -- # QOS_PID=121218 00:11:57.506 14:13:49 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 121218' 00:11:57.506 Process qos testing pid: 121218 00:11:57.506 14:13:49 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:11:57.506 14:13:49 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:11:57.506 14:13:49 -- bdev/blockdev.sh@447 -- # waitforlisten 121218 00:11:57.506 14:13:49 -- common/autotest_common.sh@829 -- # '[' -z 121218 ']' 00:11:57.506 14:13:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.506 14:13:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.506 14:13:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.506 14:13:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.506 14:13:49 -- common/autotest_common.sh@10 -- # set +x 00:11:57.506 [2024-11-18 14:13:49.456915] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
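The QoS suite starts bdevperf with -z, which brings the application up idle and waiting on the RPC socket, so the test can create its bdevs and attach rate limits before any I/O is issued; waitforlisten and the rpc_cmd traces that follow show that sequence. A condensed sketch of the same setup done by hand (the bdev_set_qos_limit value is illustrative, not taken from this run):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' &
  # wait until /var/tmp/spdk.sock accepts connections, then:
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc_0 128 512      # 128 MiB, 512 B blocks
  "$SPDK/scripts/rpc.py" bdev_null_create Null_1 128 512             # second test bdev, no backing memory
  "$SPDK/scripts/rpc.py" bdev_set_qos_limit Malloc_0 --rw_ios_per_sec 20000

The assigned_rate_limits block in every bdev_get_bdevs dump in this log (rw_ios_per_sec, rw_mbytes_per_sec, r_mbytes_per_sec, w_mbytes_per_sec) is exactly the set of knobs bdev_set_qos_limit controls; 0 means unlimited.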
00:11:57.506 [2024-11-18 14:13:49.457849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121218 ] 00:11:57.764 [2024-11-18 14:13:49.607375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.764 [2024-11-18 14:13:49.678596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.331 14:13:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.331 14:13:50 -- common/autotest_common.sh@862 -- # return 0 00:11:58.331 14:13:50 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:11:58.331 14:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.331 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.591 Malloc_0 00:11:58.591 14:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.591 14:13:50 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:11:58.591 14:13:50 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:11:58.591 14:13:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.591 14:13:50 -- common/autotest_common.sh@899 -- # local i 00:11:58.591 14:13:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.591 14:13:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.591 14:13:50 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:58.591 14:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.591 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.591 14:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.591 14:13:50 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:11:58.591 14:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.591 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.591 [ 00:11:58.591 { 00:11:58.591 "name": "Malloc_0", 00:11:58.591 "aliases": [ 00:11:58.591 "a251f132-8849-4b65-ab68-b4b6d2773399" 00:11:58.591 ], 00:11:58.591 "product_name": "Malloc disk", 00:11:58.591 "block_size": 512, 00:11:58.591 "num_blocks": 262144, 00:11:58.591 "uuid": "a251f132-8849-4b65-ab68-b4b6d2773399", 00:11:58.591 "assigned_rate_limits": { 00:11:58.591 "rw_ios_per_sec": 0, 00:11:58.591 "rw_mbytes_per_sec": 0, 00:11:58.591 "r_mbytes_per_sec": 0, 00:11:58.591 "w_mbytes_per_sec": 0 00:11:58.591 }, 00:11:58.591 "claimed": false, 00:11:58.591 "zoned": false, 00:11:58.591 "supported_io_types": { 00:11:58.591 "read": true, 00:11:58.591 "write": true, 00:11:58.591 "unmap": true, 00:11:58.591 "write_zeroes": true, 00:11:58.591 "flush": true, 00:11:58.591 "reset": true, 00:11:58.591 "compare": false, 00:11:58.591 "compare_and_write": false, 00:11:58.591 "abort": true, 00:11:58.591 "nvme_admin": false, 00:11:58.591 "nvme_io": false 00:11:58.591 }, 00:11:58.591 "memory_domains": [ 00:11:58.591 { 00:11:58.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.591 "dma_device_type": 2 00:11:58.591 } 00:11:58.591 ], 00:11:58.591 "driver_specific": {} 00:11:58.591 } 00:11:58.591 ] 00:11:58.591 14:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.591 14:13:50 -- common/autotest_common.sh@905 -- # return 0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:11:58.591 14:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.591 14:13:50 -- common/autotest_common.sh@10 -- # 
set +x 00:11:58.591 Null_1 00:11:58.591 14:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.591 14:13:50 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:11:58.591 14:13:50 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:11:58.591 14:13:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.591 14:13:50 -- common/autotest_common.sh@899 -- # local i 00:11:58.591 14:13:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.591 14:13:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.591 14:13:50 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:11:58.591 14:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.591 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.591 14:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.591 14:13:50 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:11:58.591 14:13:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.591 14:13:50 -- common/autotest_common.sh@10 -- # set +x 00:11:58.591 [ 00:11:58.591 { 00:11:58.591 "name": "Null_1", 00:11:58.591 "aliases": [ 00:11:58.591 "515fdf87-43a5-450c-a9e5-cd14211e5547" 00:11:58.591 ], 00:11:58.591 "product_name": "Null disk", 00:11:58.591 "block_size": 512, 00:11:58.591 "num_blocks": 262144, 00:11:58.591 "uuid": "515fdf87-43a5-450c-a9e5-cd14211e5547", 00:11:58.591 "assigned_rate_limits": { 00:11:58.591 "rw_ios_per_sec": 0, 00:11:58.591 "rw_mbytes_per_sec": 0, 00:11:58.591 "r_mbytes_per_sec": 0, 00:11:58.591 "w_mbytes_per_sec": 0 00:11:58.591 }, 00:11:58.591 "claimed": false, 00:11:58.591 "zoned": false, 00:11:58.591 "supported_io_types": { 00:11:58.591 "read": true, 00:11:58.591 "write": true, 00:11:58.591 "unmap": false, 00:11:58.591 "write_zeroes": true, 00:11:58.591 "flush": false, 00:11:58.591 "reset": true, 00:11:58.591 "compare": false, 00:11:58.591 "compare_and_write": false, 00:11:58.591 "abort": true, 00:11:58.591 "nvme_admin": false, 00:11:58.591 "nvme_io": false 00:11:58.591 }, 00:11:58.591 "driver_specific": {} 00:11:58.591 } 00:11:58.591 ] 00:11:58.591 14:13:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.591 14:13:50 -- common/autotest_common.sh@905 -- # return 0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@455 -- # qos_function_test 00:11:58.591 14:13:50 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:11:58.591 14:13:50 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.591 14:13:50 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:11:58.591 14:13:50 -- bdev/blockdev.sh@410 -- # local io_result=0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:11:58.591 14:13:50 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@375 -- # local iostat_result 00:11:58.591 14:13:50 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:11:58.591 14:13:50 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:11:58.591 14:13:50 -- bdev/blockdev.sh@376 -- # tail -1 00:11:58.591 Running I/O for 60 seconds... 
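The sixty-second run that follows is the calibration pass of the QoS suite: bdevperf drives Malloc_0 with no limit applied while iostat.py samples the device once per second, and the free-running IOPS figure is then converted into the cap handed to bdev_set_qos_limit. Below is a sketch of that step, assuming the rounding rule implied by the echoed values (84977.04 measured, 21000 applied); the /4-and-floor arithmetic is inferred from those numbers rather than quoted from blockdev.sh, and rpc_cmd stands in for the harness wrapper around scripts/rpc.py with socket arguments omitted:

  # Calibration sketch -- the /4 and floor-to-thousands are inferred, not quoted.
  iostat_line=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
  io_result=$(awk '{print $2}' <<< "$iostat_line")   # tps column, e.g. 84977.04
  io_result=${io_result%%.*}                         # integer part: 84977
  iops_limit=$((io_result / 4 / 1000 * 1000))        # quarter of the free-running
                                                     # rate, floored to thousands: 21000
  if ((iops_limit > 1000)); then                     # only throttle above the floor
      rpc_cmd bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0
  fi

run_qos_test then re-measures under the cap and passes only if the throttled rate stays within ten percent of it; the 18900 and 23100 bounds printed in the next stretch of xtrace are exactly 21000 * 9 / 10 and 21000 * 11 / 10 in shell integer arithmetic.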
00:12:03.907 14:13:55 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 84977.04 339908.16 0.00 0.00 344064.00 0.00 0.00 ' 00:12:03.907 14:13:55 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:03.907 14:13:55 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:03.907 14:13:55 -- bdev/blockdev.sh@378 -- # iostat_result=84977.04 00:12:03.907 14:13:55 -- bdev/blockdev.sh@383 -- # echo 84977 00:12:03.907 14:13:55 -- bdev/blockdev.sh@414 -- # io_result=84977 00:12:03.907 14:13:55 -- bdev/blockdev.sh@416 -- # iops_limit=21000 00:12:03.907 14:13:55 -- bdev/blockdev.sh@417 -- # '[' 21000 -gt 1000 ']' 00:12:03.907 14:13:55 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0 00:12:03.907 14:13:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.907 14:13:55 -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 14:13:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.907 14:13:55 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0 00:12:03.907 14:13:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:03.907 14:13:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:03.907 14:13:55 -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 ************************************ 00:12:03.907 START TEST bdev_qos_iops 00:12:03.907 ************************************ 00:12:03.907 14:13:55 -- common/autotest_common.sh@1114 -- # run_qos_test 21000 IOPS Malloc_0 00:12:03.907 14:13:55 -- bdev/blockdev.sh@387 -- # local qos_limit=21000 00:12:03.907 14:13:55 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:03.907 14:13:55 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:12:03.907 14:13:55 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:03.907 14:13:55 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:03.907 14:13:55 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:03.907 14:13:55 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:03.907 14:13:55 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:03.907 14:13:55 -- bdev/blockdev.sh@376 -- # tail -1 00:12:09.184 14:14:00 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 21000.44 84001.75 0.00 0.00 85344.00 0.00 0.00 ' 00:12:09.184 14:14:00 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:09.184 14:14:00 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:09.184 14:14:00 -- bdev/blockdev.sh@378 -- # iostat_result=21000.44 00:12:09.184 14:14:00 -- bdev/blockdev.sh@383 -- # echo 21000 00:12:09.184 14:14:00 -- bdev/blockdev.sh@390 -- # qos_result=21000 00:12:09.184 14:14:00 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:12:09.184 14:14:00 -- bdev/blockdev.sh@394 -- # lower_limit=18900 00:12:09.184 14:14:00 -- bdev/blockdev.sh@395 -- # upper_limit=23100 00:12:09.184 14:14:00 -- bdev/blockdev.sh@398 -- # '[' 21000 -lt 18900 ']' 00:12:09.184 14:14:00 -- bdev/blockdev.sh@398 -- # '[' 21000 -gt 23100 ']' 00:12:09.184 00:12:09.184 real 0m5.211s 00:12:09.184 user 0m0.114s 00:12:09.184 sys 0m0.026s 00:12:09.184 14:14:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:09.184 14:14:00 -- common/autotest_common.sh@10 -- # set +x 00:12:09.184 ************************************ 00:12:09.184 END TEST bdev_qos_iops 00:12:09.184 ************************************ 00:12:09.184 14:14:00 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:12:09.184 14:14:00 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:09.184 14:14:00 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:09.184 14:14:00 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:09.184 14:14:00 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:09.184 14:14:00 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:09.184 14:14:00 -- bdev/blockdev.sh@376 -- # tail -1 00:12:14.449 14:14:06 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 30522.90 122091.61 0.00 0.00 123904.00 0.00 0.00 ' 00:12:14.449 14:14:06 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:14.449 14:14:06 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:14.449 14:14:06 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:14.449 14:14:06 -- bdev/blockdev.sh@380 -- # iostat_result=123904.00 00:12:14.449 14:14:06 -- bdev/blockdev.sh@383 -- # echo 123904 00:12:14.449 14:14:06 -- bdev/blockdev.sh@425 -- # bw_limit=123904 00:12:14.449 14:14:06 -- bdev/blockdev.sh@426 -- # bw_limit=12 00:12:14.449 14:14:06 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']' 00:12:14.449 14:14:06 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:12:14.449 14:14:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.449 14:14:06 -- common/autotest_common.sh@10 -- # set +x 00:12:14.449 14:14:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.449 14:14:06 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:12:14.449 14:14:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:14.449 14:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.449 14:14:06 -- common/autotest_common.sh@10 -- # set +x 00:12:14.449 ************************************ 00:12:14.449 START TEST bdev_qos_bw 00:12:14.449 ************************************ 00:12:14.449 14:14:06 -- common/autotest_common.sh@1114 -- # run_qos_test 12 BANDWIDTH Null_1 00:12:14.449 14:14:06 -- bdev/blockdev.sh@387 -- # local qos_limit=12 00:12:14.449 14:14:06 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:14.449 14:14:06 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:12:14.450 14:14:06 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:14.450 14:14:06 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:14.450 14:14:06 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:14.450 14:14:06 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:14.450 14:14:06 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:14.450 14:14:06 -- bdev/blockdev.sh@376 -- # tail -1 00:12:19.718 14:14:11 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3071.12 12284.48 0.00 0.00 12532.00 0.00 0.00 ' 00:12:19.718 14:14:11 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:19.718 14:14:11 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:19.718 14:14:11 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:19.718 14:14:11 -- bdev/blockdev.sh@380 -- # iostat_result=12532.00 00:12:19.718 14:14:11 -- bdev/blockdev.sh@383 -- # echo 12532 00:12:19.718 ************************************ 00:12:19.718 END TEST bdev_qos_bw 00:12:19.718 ************************************ 00:12:19.718 14:14:11 -- bdev/blockdev.sh@390 -- # qos_result=12532 00:12:19.718 14:14:11 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:19.718 14:14:11 -- bdev/blockdev.sh@392 -- # qos_limit=12288 00:12:19.718 14:14:11 -- bdev/blockdev.sh@394 -- # lower_limit=11059 00:12:19.718 14:14:11 -- bdev/blockdev.sh@395 -- # 
upper_limit=13516 00:12:19.718 14:14:11 -- bdev/blockdev.sh@398 -- # '[' 12532 -lt 11059 ']' 00:12:19.718 14:14:11 -- bdev/blockdev.sh@398 -- # '[' 12532 -gt 13516 ']' 00:12:19.718 00:12:19.718 real 0m5.230s 00:12:19.718 user 0m0.120s 00:12:19.718 sys 0m0.023s 00:12:19.718 14:14:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:19.718 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 14:14:11 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:19.718 14:14:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.718 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 14:14:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.718 14:14:11 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:19.718 14:14:11 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:19.718 14:14:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.718 14:14:11 -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 ************************************ 00:12:19.718 START TEST bdev_qos_ro_bw 00:12:19.718 ************************************ 00:12:19.718 14:14:11 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:19.718 14:14:11 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:12:19.718 14:14:11 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:19.718 14:14:11 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:12:19.718 14:14:11 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:19.718 14:14:11 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:19.718 14:14:11 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:19.718 14:14:11 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:19.718 14:14:11 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:19.718 14:14:11 -- bdev/blockdev.sh@376 -- # tail -1 00:12:24.995 14:14:16 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 513.01 2052.03 0.00 0.00 2072.00 0.00 0.00 ' 00:12:24.995 14:14:16 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:12:24.995 14:14:16 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:24.995 14:14:16 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:12:24.995 14:14:16 -- bdev/blockdev.sh@380 -- # iostat_result=2072.00 00:12:24.995 14:14:16 -- bdev/blockdev.sh@383 -- # echo 2072 00:12:24.995 ************************************ 00:12:24.995 END TEST bdev_qos_ro_bw 00:12:24.995 ************************************ 00:12:24.995 14:14:16 -- bdev/blockdev.sh@390 -- # qos_result=2072 00:12:24.995 14:14:16 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:24.995 14:14:16 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:12:24.995 14:14:16 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:12:24.995 14:14:16 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:12:24.995 14:14:16 -- bdev/blockdev.sh@398 -- # '[' 2072 -lt 1843 ']' 00:12:24.995 14:14:16 -- bdev/blockdev.sh@398 -- # '[' 2072 -gt 2252 ']' 00:12:24.995 00:12:24.995 real 0m5.175s 00:12:24.995 user 0m0.123s 00:12:24.995 sys 0m0.023s 00:12:24.995 14:14:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:24.995 14:14:16 -- common/autotest_common.sh@10 -- # set +x 00:12:24.995 14:14:16 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:12:24.995 14:14:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.995 14:14:16 -- common/autotest_common.sh@10 -- # set +x 
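Both bandwidth checks above, the 12 MiB/s cap on Null_1 and the 2 MiB/s read-only cap on Malloc_0, verify the same way: the MiB/s limit is scaled to KiB/s and the throughput taken from iostat column 6 has to land within ten percent of it. Here is a minimal reconstruction of that comparison, using the read-only case as the worked example; variable names follow the xtrace, and the bounds it printed (1843 and 2252) fall out of the integer arithmetic exactly:

  # BANDWIDTH branch of run_qos_test, reconstructed from the trace above.
  qos_limit_mb=2                        # value passed to --r_mbytes_per_sec
  qos_limit=$((qos_limit_mb * 1024))    # 2048 KiB/s
  qos_result=2072                       # iostat column 6 ($6), integer part
  lower_limit=$((qos_limit * 9 / 10))   # 1843
  upper_limit=$((qos_limit * 11 / 10))  # 2252
  if ((qos_result < lower_limit || qos_result > upper_limit)); then
      echo "measured ${qos_result} KiB/s outside [${lower_limit}, ${upper_limit}]" >&2
      exit 1
  fi

The same arithmetic reproduces the 11059/13516 window of the 12 MiB/s test (12288 * 9 / 10 and 12288 * 11 / 10), so all three QoS sub-tests share a single ten-percent tolerance check.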
00:12:25.253 14:14:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.253 14:14:17 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:12:25.253 14:14:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.253 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:12:25.512 00:12:25.512 Latency(us) 00:12:25.512 [2024-11-18T14:14:17.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.512 [2024-11-18T14:14:17.587Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:25.513 Malloc_0 : 26.66 28483.73 111.26 0.00 0.00 8904.81 2144.81 503316.48 00:12:25.513 [2024-11-18T14:14:17.587Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:25.513 Null_1 : 26.78 29114.26 113.73 0.00 0.00 8777.15 633.02 116296.61 00:12:25.513 [2024-11-18T14:14:17.587Z] =================================================================================================================== 00:12:25.513 [2024-11-18T14:14:17.587Z] Total : 57597.99 224.99 0.00 0.00 8840.14 633.02 503316.48 00:12:25.513 0 00:12:25.513 14:14:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.513 14:14:17 -- bdev/blockdev.sh@459 -- # killprocess 121218 00:12:25.513 14:14:17 -- common/autotest_common.sh@936 -- # '[' -z 121218 ']' 00:12:25.513 14:14:17 -- common/autotest_common.sh@940 -- # kill -0 121218 00:12:25.513 14:14:17 -- common/autotest_common.sh@941 -- # uname 00:12:25.513 14:14:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.513 14:14:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121218 00:12:25.513 killing process with pid 121218 00:12:25.513 Received shutdown signal, test time was about 26.808278 seconds 00:12:25.513 00:12:25.513 Latency(us) 00:12:25.513 [2024-11-18T14:14:17.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.513 [2024-11-18T14:14:17.587Z] =================================================================================================================== 00:12:25.513 [2024-11-18T14:14:17.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:25.513 14:14:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:25.513 14:14:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:25.513 14:14:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121218' 00:12:25.513 14:14:17 -- common/autotest_common.sh@955 -- # kill 121218 00:12:25.513 14:14:17 -- common/autotest_common.sh@960 -- # wait 121218 00:12:25.772 ************************************ 00:12:25.772 END TEST bdev_qos 00:12:25.772 ************************************ 00:12:25.772 14:14:17 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:12:25.772 00:12:25.772 real 0m28.334s 00:12:25.772 user 0m29.099s 00:12:25.772 sys 0m0.567s 00:12:25.772 14:14:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:25.772 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:12:25.772 14:14:17 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:12:25.772 14:14:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:25.772 14:14:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.772 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:12:25.772 ************************************ 00:12:25.772 START TEST bdev_qd_sampling 00:12:25.772 ************************************ 00:12:25.772 14:14:17 -- common/autotest_common.sh@1114 -- # 
qd_sampling_test_suite '' 00:12:25.772 14:14:17 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:12:25.772 14:14:17 -- bdev/blockdev.sh@539 -- # QD_PID=121688 00:12:25.772 14:14:17 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:12:25.772 14:14:17 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 121688' 00:12:25.772 Process bdev QD sampling period testing pid: 121688 00:12:25.772 14:14:17 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:12:25.772 14:14:17 -- bdev/blockdev.sh@542 -- # waitforlisten 121688 00:12:25.772 14:14:17 -- common/autotest_common.sh@829 -- # '[' -z 121688 ']' 00:12:25.772 14:14:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.772 14:14:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.772 14:14:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.772 14:14:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.772 14:14:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.031 [2024-11-18 14:14:17.858709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:26.031 [2024-11-18 14:14:17.858959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121688 ] 00:12:26.031 [2024-11-18 14:14:18.016868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.031 [2024-11-18 14:14:18.095462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.031 [2024-11-18 14:14:18.095474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.966 14:14:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.966 14:14:18 -- common/autotest_common.sh@862 -- # return 0 00:12:26.966 14:14:18 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:12:26.966 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.966 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.966 Malloc_QD 00:12:26.966 14:14:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.966 14:14:18 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:12:26.966 14:14:18 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:12:26.966 14:14:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.966 14:14:18 -- common/autotest_common.sh@899 -- # local i 00:12:26.966 14:14:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.966 14:14:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.966 14:14:18 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:26.966 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.966 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.966 14:14:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.966 14:14:18 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:12:26.966 14:14:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.966 14:14:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.966 [ 
00:12:26.966 { 00:12:26.966 "name": "Malloc_QD", 00:12:26.966 "aliases": [ 00:12:26.966 "55b097f0-82f8-453f-914e-b05c5a7153af" 00:12:26.966 ], 00:12:26.966 "product_name": "Malloc disk", 00:12:26.966 "block_size": 512, 00:12:26.966 "num_blocks": 262144, 00:12:26.966 "uuid": "55b097f0-82f8-453f-914e-b05c5a7153af", 00:12:26.966 "assigned_rate_limits": { 00:12:26.966 "rw_ios_per_sec": 0, 00:12:26.966 "rw_mbytes_per_sec": 0, 00:12:26.966 "r_mbytes_per_sec": 0, 00:12:26.966 "w_mbytes_per_sec": 0 00:12:26.966 }, 00:12:26.966 "claimed": false, 00:12:26.966 "zoned": false, 00:12:26.966 "supported_io_types": { 00:12:26.966 "read": true, 00:12:26.966 "write": true, 00:12:26.966 "unmap": true, 00:12:26.966 "write_zeroes": true, 00:12:26.966 "flush": true, 00:12:26.966 "reset": true, 00:12:26.966 "compare": false, 00:12:26.966 "compare_and_write": false, 00:12:26.966 "abort": true, 00:12:26.966 "nvme_admin": false, 00:12:26.966 "nvme_io": false 00:12:26.966 }, 00:12:26.966 "memory_domains": [ 00:12:26.966 { 00:12:26.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.966 "dma_device_type": 2 00:12:26.966 } 00:12:26.966 ], 00:12:26.966 "driver_specific": {} 00:12:26.966 } 00:12:26.966 ] 00:12:26.966 14:14:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.966 14:14:18 -- common/autotest_common.sh@905 -- # return 0 00:12:26.966 14:14:18 -- bdev/blockdev.sh@548 -- # sleep 2 00:12:26.966 14:14:18 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:26.966 Running I/O for 5 seconds... 00:12:28.876 14:14:20 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:12:28.876 14:14:20 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:12:28.876 14:14:20 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:12:28.876 14:14:20 -- bdev/blockdev.sh@519 -- # local iostats 00:12:28.876 14:14:20 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:12:28.876 14:14:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.876 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:12:28.876 14:14:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.876 14:14:20 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:12:28.876 14:14:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.876 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:12:28.876 14:14:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.876 14:14:20 -- bdev/blockdev.sh@523 -- # iostats='{ 00:12:28.876 "tick_rate": 2200000000, 00:12:28.876 "ticks": 1491406019104, 00:12:28.876 "bdevs": [ 00:12:28.876 { 00:12:28.876 "name": "Malloc_QD", 00:12:28.876 "bytes_read": 1023447552, 00:12:28.876 "num_read_ops": 249859, 00:12:28.876 "bytes_written": 0, 00:12:28.876 "num_write_ops": 0, 00:12:28.876 "bytes_unmapped": 0, 00:12:28.876 "num_unmap_ops": 0, 00:12:28.876 "bytes_copied": 0, 00:12:28.876 "num_copy_ops": 0, 00:12:28.876 "read_latency_ticks": 2151647731040, 00:12:28.876 "max_read_latency_ticks": 10937316, 00:12:28.876 "min_read_latency_ticks": 355818, 00:12:28.876 "write_latency_ticks": 0, 00:12:28.876 "max_write_latency_ticks": 0, 00:12:28.876 "min_write_latency_ticks": 0, 00:12:28.876 "unmap_latency_ticks": 0, 00:12:28.876 "max_unmap_latency_ticks": 0, 00:12:28.876 "min_unmap_latency_ticks": 0, 00:12:28.876 "copy_latency_ticks": 0, 00:12:28.876 "max_copy_latency_ticks": 0, 00:12:28.876 "min_copy_latency_ticks": 0, 00:12:28.876 "io_error": {}, 00:12:28.876 
"queue_depth_polling_period": 10, 00:12:28.876 "queue_depth": 512, 00:12:28.876 "io_time": 20, 00:12:28.876 "weighted_io_time": 10240 00:12:28.876 } 00:12:28.876 ] 00:12:28.876 }' 00:12:28.876 14:14:20 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:12:28.876 14:14:20 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:12:28.876 14:14:20 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:12:28.876 14:14:20 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:12:28.876 14:14:20 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:12:28.876 14:14:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.876 14:14:20 -- common/autotest_common.sh@10 -- # set +x 00:12:28.876 00:12:28.876 Latency(us) 00:12:28.876 [2024-11-18T14:14:20.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.876 [2024-11-18T14:14:20.950Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:28.876 Malloc_QD : 1.99 64981.59 253.83 0.00 0.00 3930.33 990.49 5779.08 00:12:28.876 [2024-11-18T14:14:20.950Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:28.876 Malloc_QD : 1.99 65582.58 256.18 0.00 0.00 3894.83 703.77 4468.36 00:12:28.876 [2024-11-18T14:14:20.950Z] =================================================================================================================== 00:12:28.876 [2024-11-18T14:14:20.950Z] Total : 130564.17 510.02 0.00 0.00 3912.49 703.77 5779.08 00:12:29.135 0 00:12:29.135 14:14:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.135 14:14:20 -- bdev/blockdev.sh@552 -- # killprocess 121688 00:12:29.135 14:14:20 -- common/autotest_common.sh@936 -- # '[' -z 121688 ']' 00:12:29.135 14:14:20 -- common/autotest_common.sh@940 -- # kill -0 121688 00:12:29.135 14:14:20 -- common/autotest_common.sh@941 -- # uname 00:12:29.135 14:14:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.135 14:14:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121688 00:12:29.135 14:14:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.135 14:14:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.135 14:14:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121688' 00:12:29.135 killing process with pid 121688 00:12:29.135 14:14:20 -- common/autotest_common.sh@955 -- # kill 121688 00:12:29.135 Received shutdown signal, test time was about 2.047923 seconds 00:12:29.135 00:12:29.135 Latency(us) 00:12:29.135 [2024-11-18T14:14:21.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.135 [2024-11-18T14:14:21.209Z] =================================================================================================================== 00:12:29.135 [2024-11-18T14:14:21.209Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:29.135 14:14:20 -- common/autotest_common.sh@960 -- # wait 121688 00:12:29.394 14:14:21 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:12:29.394 00:12:29.394 real 0m3.522s 00:12:29.394 user 0m6.779s 00:12:29.394 sys 0m0.374s 00:12:29.394 14:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.394 14:14:21 -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 ************************************ 00:12:29.394 END TEST bdev_qd_sampling 00:12:29.394 ************************************ 00:12:29.394 14:14:21 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:12:29.394 14:14:21 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:29.394 14:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.394 14:14:21 -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 ************************************ 00:12:29.394 START TEST bdev_error 00:12:29.394 ************************************ 00:12:29.394 14:14:21 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:12:29.394 14:14:21 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:12:29.394 14:14:21 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:12:29.394 14:14:21 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:12:29.394 14:14:21 -- bdev/blockdev.sh@470 -- # ERR_PID=121772 00:12:29.394 Process error testing pid: 121772 00:12:29.394 14:14:21 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 121772' 00:12:29.394 14:14:21 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:12:29.394 14:14:21 -- bdev/blockdev.sh@472 -- # waitforlisten 121772 00:12:29.394 14:14:21 -- common/autotest_common.sh@829 -- # '[' -z 121772 ']' 00:12:29.394 14:14:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.394 14:14:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.394 14:14:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.394 14:14:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.394 14:14:21 -- common/autotest_common.sh@10 -- # set +x 00:12:29.394 [2024-11-18 14:14:21.434593] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:29.394 [2024-11-18 14:14:21.434845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121772 ] 00:12:29.653 [2024-11-18 14:14:21.572422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.653 [2024-11-18 14:14:21.642119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.588 14:14:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.588 14:14:22 -- common/autotest_common.sh@862 -- # return 0 00:12:30.588 14:14:22 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 Dev_1 00:12:30.588 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.588 14:14:22 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:12:30.588 14:14:22 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:12:30.588 14:14:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:30.588 14:14:22 -- common/autotest_common.sh@899 -- # local i 00:12:30.588 14:14:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:30.588 14:14:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:30.588 14:14:22 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.588 14:14:22 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 [ 00:12:30.588 { 00:12:30.588 "name": "Dev_1", 00:12:30.588 "aliases": [ 00:12:30.588 "1d8ddf4d-8a65-4412-85ee-69bfbc79aa28" 00:12:30.588 ], 00:12:30.588 "product_name": "Malloc disk", 00:12:30.588 "block_size": 512, 00:12:30.588 "num_blocks": 262144, 00:12:30.588 "uuid": "1d8ddf4d-8a65-4412-85ee-69bfbc79aa28", 00:12:30.588 "assigned_rate_limits": { 00:12:30.588 "rw_ios_per_sec": 0, 00:12:30.588 "rw_mbytes_per_sec": 0, 00:12:30.588 "r_mbytes_per_sec": 0, 00:12:30.588 "w_mbytes_per_sec": 0 00:12:30.588 }, 00:12:30.588 "claimed": false, 00:12:30.588 "zoned": false, 00:12:30.588 "supported_io_types": { 00:12:30.588 "read": true, 00:12:30.588 "write": true, 00:12:30.588 "unmap": true, 00:12:30.588 "write_zeroes": true, 00:12:30.588 "flush": true, 00:12:30.588 "reset": true, 00:12:30.588 "compare": false, 00:12:30.588 "compare_and_write": false, 00:12:30.588 "abort": true, 00:12:30.588 "nvme_admin": false, 00:12:30.588 "nvme_io": false 00:12:30.588 }, 00:12:30.588 "memory_domains": [ 00:12:30.588 { 00:12:30.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.588 "dma_device_type": 2 00:12:30.588 } 00:12:30.588 ], 00:12:30.588 "driver_specific": {} 00:12:30.588 } 00:12:30.588 ] 00:12:30.588 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.588 14:14:22 -- common/autotest_common.sh@905 -- # return 0 00:12:30.588 14:14:22 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 true 
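At this point the error suite's first target is in place: bdev_error_create has wrapped the Dev_1 malloc disk in an error-injecting virtual bdev, which the module exposes under the EE_ prefix as EE_Dev_1. The control bdev Dev_2 and the injection call follow just below; gathered in one place, the setup is sketched here, mirroring the rpc_cmd calls in the trace, with the -n semantics assumed to mean "inject this many failures", which is what the Fail/s column later suggests:

  # Error-injection setup for the first error test (pid 121772).
  rpc_cmd bdev_malloc_create -b Dev_1 128 512    # 128 MiB base bdev, 512 B blocks
  rpc_cmd bdev_error_create Dev_1                # exposes wrapper bdev EE_Dev_1
  rpc_cmd bdev_malloc_create -b Dev_2 128 512    # control bdev, no injection
  rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5

Because this bdevperf instance was started with -f ("continue on error", per the echoed message), the five injected failures are tallied in the EE_Dev_1 Fail/s column rather than aborting the run; the later instance (pid 121873) repeats the setup without -f, so there perform_tests is expected to fail and the harness asserts on the JSON-RPC error response instead.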
00:12:30.588 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.588 14:14:22 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 Dev_2 00:12:30.588 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.588 14:14:22 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:12:30.588 14:14:22 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:12:30.588 14:14:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:30.588 14:14:22 -- common/autotest_common.sh@899 -- # local i 00:12:30.588 14:14:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:30.588 14:14:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:30.588 14:14:22 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.588 14:14:22 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:30.588 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.588 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 [ 00:12:30.588 { 00:12:30.588 "name": "Dev_2", 00:12:30.588 "aliases": [ 00:12:30.588 "461a03d6-0280-494f-ad62-cb4f77ce12bb" 00:12:30.588 ], 00:12:30.588 "product_name": "Malloc disk", 00:12:30.588 "block_size": 512, 00:12:30.588 "num_blocks": 262144, 00:12:30.588 "uuid": "461a03d6-0280-494f-ad62-cb4f77ce12bb", 00:12:30.588 "assigned_rate_limits": { 00:12:30.588 "rw_ios_per_sec": 0, 00:12:30.588 "rw_mbytes_per_sec": 0, 00:12:30.588 "r_mbytes_per_sec": 0, 00:12:30.588 "w_mbytes_per_sec": 0 00:12:30.588 }, 00:12:30.588 "claimed": false, 00:12:30.588 "zoned": false, 00:12:30.588 "supported_io_types": { 00:12:30.588 "read": true, 00:12:30.588 "write": true, 00:12:30.588 "unmap": true, 00:12:30.588 "write_zeroes": true, 00:12:30.589 "flush": true, 00:12:30.589 "reset": true, 00:12:30.589 "compare": false, 00:12:30.589 "compare_and_write": false, 00:12:30.589 "abort": true, 00:12:30.589 "nvme_admin": false, 00:12:30.589 "nvme_io": false 00:12:30.589 }, 00:12:30.589 "memory_domains": [ 00:12:30.589 { 00:12:30.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.589 "dma_device_type": 2 00:12:30.589 } 00:12:30.589 ], 00:12:30.589 "driver_specific": {} 00:12:30.589 } 00:12:30.589 ] 00:12:30.589 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.589 14:14:22 -- common/autotest_common.sh@905 -- # return 0 00:12:30.589 14:14:22 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:30.589 14:14:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.589 14:14:22 -- common/autotest_common.sh@10 -- # set +x 00:12:30.589 14:14:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.589 14:14:22 -- bdev/blockdev.sh@482 -- # sleep 1 00:12:30.589 14:14:22 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:30.589 Running I/O for 5 seconds... 00:12:31.529 14:14:23 -- bdev/blockdev.sh@485 -- # kill -0 121772 00:12:31.529 Process is existed as continue on error is set. Pid: 121772 00:12:31.529 14:14:23 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. 
Pid: 121772' 00:12:31.529 14:14:23 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:12:31.529 14:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.529 14:14:23 -- common/autotest_common.sh@10 -- # set +x 00:12:31.529 14:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.529 14:14:23 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:12:31.529 14:14:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.529 14:14:23 -- common/autotest_common.sh@10 -- # set +x 00:12:31.529 14:14:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.529 14:14:23 -- bdev/blockdev.sh@495 -- # sleep 5 00:12:31.529 Timeout while waiting for response: 00:12:31.529 00:12:31.529 00:12:35.719 00:12:35.719 Latency(us) 00:12:35.719 [2024-11-18T14:14:27.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.719 [2024-11-18T14:14:27.793Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:35.719 EE_Dev_1 : 0.93 46592.60 182.00 5.36 0.00 340.93 188.97 826.65 00:12:35.719 [2024-11-18T14:14:27.793Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:35.719 Dev_2 : 5.00 104357.78 407.65 0.00 0.00 150.98 54.69 31933.91 00:12:35.719 [2024-11-18T14:14:27.793Z] =================================================================================================================== 00:12:35.719 [2024-11-18T14:14:27.793Z] Total : 150950.38 589.65 5.36 0.00 165.58 54.69 31933.91 00:12:36.657 14:14:28 -- bdev/blockdev.sh@497 -- # killprocess 121772 00:12:36.657 14:14:28 -- common/autotest_common.sh@936 -- # '[' -z 121772 ']' 00:12:36.657 14:14:28 -- common/autotest_common.sh@940 -- # kill -0 121772 00:12:36.657 14:14:28 -- common/autotest_common.sh@941 -- # uname 00:12:36.657 14:14:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.657 14:14:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121772 00:12:36.657 14:14:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:36.657 14:14:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:36.657 killing process with pid 121772 00:12:36.657 14:14:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121772' 00:12:36.657 Received shutdown signal, test time was about 5.000000 seconds 00:12:36.657 00:12:36.657 Latency(us) 00:12:36.657 [2024-11-18T14:14:28.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.657 [2024-11-18T14:14:28.731Z] =================================================================================================================== 00:12:36.657 [2024-11-18T14:14:28.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:36.657 14:14:28 -- common/autotest_common.sh@955 -- # kill 121772 00:12:36.657 14:14:28 -- common/autotest_common.sh@960 -- # wait 121772 00:12:36.916 14:14:28 -- bdev/blockdev.sh@501 -- # ERR_PID=121873 00:12:36.916 Process error testing pid: 121873 00:12:36.916 14:14:28 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 121873' 00:12:36.916 14:14:28 -- bdev/blockdev.sh@503 -- # waitforlisten 121873 00:12:36.916 14:14:28 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:12:36.916 14:14:28 -- common/autotest_common.sh@829 -- # '[' -z 121873 ']' 00:12:36.917 14:14:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.917 14:14:28 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.917 14:14:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.917 14:14:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.917 14:14:28 -- common/autotest_common.sh@10 -- # set +x 00:12:36.917 [2024-11-18 14:14:28.984867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:36.917 [2024-11-18 14:14:28.985443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121873 ] 00:12:37.175 [2024-11-18 14:14:29.131946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.175 [2024-11-18 14:14:29.198645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.111 14:14:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.111 14:14:29 -- common/autotest_common.sh@862 -- # return 0 00:12:38.111 14:14:29 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:12:38.111 14:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.111 14:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.111 Dev_1 00:12:38.111 14:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.111 14:14:29 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:12:38.111 14:14:29 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:12:38.111 14:14:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:38.111 14:14:29 -- common/autotest_common.sh@899 -- # local i 00:12:38.111 14:14:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:38.111 14:14:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:38.111 14:14:29 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:38.111 14:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.111 14:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.111 14:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.111 14:14:29 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:12:38.111 14:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.111 14:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.111 [ 00:12:38.111 { 00:12:38.111 "name": "Dev_1", 00:12:38.111 "aliases": [ 00:12:38.111 "0e3c3e57-6f39-4fa9-87ec-a52951ab64e0" 00:12:38.111 ], 00:12:38.111 "product_name": "Malloc disk", 00:12:38.111 "block_size": 512, 00:12:38.111 "num_blocks": 262144, 00:12:38.111 "uuid": "0e3c3e57-6f39-4fa9-87ec-a52951ab64e0", 00:12:38.111 "assigned_rate_limits": { 00:12:38.111 "rw_ios_per_sec": 0, 00:12:38.111 "rw_mbytes_per_sec": 0, 00:12:38.111 "r_mbytes_per_sec": 0, 00:12:38.111 "w_mbytes_per_sec": 0 00:12:38.111 }, 00:12:38.111 "claimed": false, 00:12:38.111 "zoned": false, 00:12:38.111 "supported_io_types": { 00:12:38.111 "read": true, 00:12:38.111 "write": true, 00:12:38.111 "unmap": true, 00:12:38.111 "write_zeroes": true, 00:12:38.111 "flush": true, 00:12:38.111 "reset": true, 00:12:38.111 "compare": false, 00:12:38.111 "compare_and_write": false, 00:12:38.111 "abort": true, 00:12:38.111 "nvme_admin": false, 00:12:38.111 "nvme_io": false 00:12:38.111 }, 00:12:38.111 "memory_domains": [ 
00:12:38.111 { 00:12:38.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.111 "dma_device_type": 2 00:12:38.111 } 00:12:38.111 ], 00:12:38.111 "driver_specific": {} 00:12:38.111 } 00:12:38.111 ] 00:12:38.111 14:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.111 14:14:29 -- common/autotest_common.sh@905 -- # return 0 00:12:38.111 14:14:29 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:12:38.111 14:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.111 14:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.111 true 00:12:38.111 14:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.111 14:14:29 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:12:38.111 14:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.111 14:14:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.111 Dev_2 00:12:38.111 14:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.111 14:14:30 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:12:38.111 14:14:30 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:12:38.111 14:14:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:38.111 14:14:30 -- common/autotest_common.sh@899 -- # local i 00:12:38.111 14:14:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:38.111 14:14:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:38.111 14:14:30 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:38.112 14:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.112 14:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.112 14:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.112 14:14:30 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:12:38.112 14:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.112 14:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.112 [ 00:12:38.112 { 00:12:38.112 "name": "Dev_2", 00:12:38.112 "aliases": [ 00:12:38.112 "986ae0c6-1bc6-47c8-b740-e941ba941218" 00:12:38.112 ], 00:12:38.112 "product_name": "Malloc disk", 00:12:38.112 "block_size": 512, 00:12:38.112 "num_blocks": 262144, 00:12:38.112 "uuid": "986ae0c6-1bc6-47c8-b740-e941ba941218", 00:12:38.112 "assigned_rate_limits": { 00:12:38.112 "rw_ios_per_sec": 0, 00:12:38.112 "rw_mbytes_per_sec": 0, 00:12:38.112 "r_mbytes_per_sec": 0, 00:12:38.112 "w_mbytes_per_sec": 0 00:12:38.112 }, 00:12:38.112 "claimed": false, 00:12:38.112 "zoned": false, 00:12:38.112 "supported_io_types": { 00:12:38.112 "read": true, 00:12:38.112 "write": true, 00:12:38.112 "unmap": true, 00:12:38.112 "write_zeroes": true, 00:12:38.112 "flush": true, 00:12:38.112 "reset": true, 00:12:38.112 "compare": false, 00:12:38.112 "compare_and_write": false, 00:12:38.112 "abort": true, 00:12:38.112 "nvme_admin": false, 00:12:38.112 "nvme_io": false 00:12:38.112 }, 00:12:38.112 "memory_domains": [ 00:12:38.112 { 00:12:38.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.112 "dma_device_type": 2 00:12:38.112 } 00:12:38.112 ], 00:12:38.112 "driver_specific": {} 00:12:38.112 } 00:12:38.112 ] 00:12:38.112 14:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.112 14:14:30 -- common/autotest_common.sh@905 -- # return 0 00:12:38.112 14:14:30 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:12:38.112 14:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.112 14:14:30 -- 
common/autotest_common.sh@10 -- # set +x 00:12:38.112 14:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.112 14:14:30 -- bdev/blockdev.sh@513 -- # NOT wait 121873 00:12:38.112 14:14:30 -- common/autotest_common.sh@650 -- # local es=0 00:12:38.112 14:14:30 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:12:38.112 14:14:30 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 121873 00:12:38.112 14:14:30 -- common/autotest_common.sh@638 -- # local arg=wait 00:12:38.112 14:14:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.112 14:14:30 -- common/autotest_common.sh@642 -- # type -t wait 00:12:38.112 14:14:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.112 14:14:30 -- common/autotest_common.sh@653 -- # wait 121873 00:12:38.112 Running I/O for 5 seconds... 00:12:38.112 task offset: 27192 on job bdev=EE_Dev_1 fails 00:12:38.112 00:12:38.112 Latency(us) 00:12:38.112 [2024-11-18T14:14:30.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.112 [2024-11-18T14:14:30.186Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:38.112 [2024-11-18T14:14:30.186Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:12:38.112 EE_Dev_1 : 0.00 29372.50 114.74 6675.57 0.00 370.55 148.01 670.25 00:12:38.112 [2024-11-18T14:14:30.186Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:12:38.112 Dev_2 : 0.00 21813.22 85.21 0.00 0.00 496.47 143.36 897.40 00:12:38.112 [2024-11-18T14:14:30.186Z] =================================================================================================================== 00:12:38.112 [2024-11-18T14:14:30.186Z] Total : 51185.72 199.94 6675.57 0.00 438.84 143.36 897.40 00:12:38.112 [2024-11-18 14:14:30.130631] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:38.112 request: 00:12:38.112 { 00:12:38.112 "method": "perform_tests", 00:12:38.112 "req_id": 1 00:12:38.112 } 00:12:38.112 Got JSON-RPC error response 00:12:38.112 response: 00:12:38.112 { 00:12:38.112 "code": -32603, 00:12:38.112 "message": "bdevperf failed with error Operation not permitted" 00:12:38.112 } 00:12:38.681 14:14:30 -- common/autotest_common.sh@653 -- # es=255 00:12:38.681 14:14:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.681 14:14:30 -- common/autotest_common.sh@662 -- # es=127 00:12:38.681 14:14:30 -- common/autotest_common.sh@663 -- # case "$es" in 00:12:38.681 14:14:30 -- common/autotest_common.sh@670 -- # es=1 00:12:38.681 14:14:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.681 00:12:38.681 real 0m9.193s 00:12:38.681 user 0m9.303s 00:12:38.681 sys 0m0.733s 00:12:38.681 14:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:38.681 14:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.681 ************************************ 00:12:38.681 END TEST bdev_error 00:12:38.681 ************************************ 00:12:38.681 14:14:30 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:12:38.681 14:14:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:38.681 14:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.681 14:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.681 ************************************ 00:12:38.681 START TEST bdev_stat 00:12:38.681 ************************************ 00:12:38.681 14:14:30 -- 
common/autotest_common.sh@1114 -- # stat_test_suite '' 00:12:38.681 14:14:30 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:12:38.681 14:14:30 -- bdev/blockdev.sh@594 -- # STAT_PID=121926 00:12:38.681 Process Bdev IO statistics testing pid: 121926 00:12:38.681 14:14:30 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 121926' 00:12:38.681 14:14:30 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:12:38.681 14:14:30 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:12:38.681 14:14:30 -- bdev/blockdev.sh@597 -- # waitforlisten 121926 00:12:38.681 14:14:30 -- common/autotest_common.sh@829 -- # '[' -z 121926 ']' 00:12:38.681 14:14:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.681 14:14:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.681 14:14:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.681 14:14:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.681 14:14:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.681 [2024-11-18 14:14:30.689324] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:38.681 [2024-11-18 14:14:30.689590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121926 ] 00:12:38.940 [2024-11-18 14:14:30.844626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.940 [2024-11-18 14:14:30.921668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.940 [2024-11-18 14:14:30.921681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.508 14:14:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.508 14:14:31 -- common/autotest_common.sh@862 -- # return 0 00:12:39.508 14:14:31 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:12:39.508 14:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.508 14:14:31 -- common/autotest_common.sh@10 -- # set +x 00:12:39.767 Malloc_STAT 00:12:39.767 14:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.767 14:14:31 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:12:39.767 14:14:31 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:12:39.767 14:14:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:39.767 14:14:31 -- common/autotest_common.sh@899 -- # local i 00:12:39.767 14:14:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:39.767 14:14:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:39.767 14:14:31 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:39.767 14:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.767 14:14:31 -- common/autotest_common.sh@10 -- # set +x 00:12:39.767 14:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.767 14:14:31 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:12:39.767 14:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.767 14:14:31 -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.767 [ 00:12:39.767 { 00:12:39.767 "name": "Malloc_STAT", 00:12:39.767 "aliases": [ 00:12:39.767 "e79e3d74-c4a9-4631-acda-39473623a74f" 00:12:39.767 ], 00:12:39.767 "product_name": "Malloc disk", 00:12:39.767 "block_size": 512, 00:12:39.767 "num_blocks": 262144, 00:12:39.767 "uuid": "e79e3d74-c4a9-4631-acda-39473623a74f", 00:12:39.767 "assigned_rate_limits": { 00:12:39.767 "rw_ios_per_sec": 0, 00:12:39.767 "rw_mbytes_per_sec": 0, 00:12:39.767 "r_mbytes_per_sec": 0, 00:12:39.767 "w_mbytes_per_sec": 0 00:12:39.767 }, 00:12:39.767 "claimed": false, 00:12:39.767 "zoned": false, 00:12:39.767 "supported_io_types": { 00:12:39.767 "read": true, 00:12:39.767 "write": true, 00:12:39.767 "unmap": true, 00:12:39.767 "write_zeroes": true, 00:12:39.767 "flush": true, 00:12:39.767 "reset": true, 00:12:39.767 "compare": false, 00:12:39.767 "compare_and_write": false, 00:12:39.767 "abort": true, 00:12:39.767 "nvme_admin": false, 00:12:39.767 "nvme_io": false 00:12:39.767 }, 00:12:39.767 "memory_domains": [ 00:12:39.767 { 00:12:39.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.767 "dma_device_type": 2 00:12:39.767 } 00:12:39.767 ], 00:12:39.767 "driver_specific": {} 00:12:39.767 } 00:12:39.767 ] 00:12:39.767 14:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.767 14:14:31 -- common/autotest_common.sh@905 -- # return 0 00:12:39.767 14:14:31 -- bdev/blockdev.sh@603 -- # sleep 2 00:12:39.767 14:14:31 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.767 Running I/O for 10 seconds... 00:12:41.672 14:14:33 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:12:41.672 14:14:33 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:12:41.672 14:14:33 -- bdev/blockdev.sh@558 -- # local iostats 00:12:41.672 14:14:33 -- bdev/blockdev.sh@559 -- # local io_count1 00:12:41.672 14:14:33 -- bdev/blockdev.sh@560 -- # local io_count2 00:12:41.672 14:14:33 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:12:41.672 14:14:33 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:12:41.672 14:14:33 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:12:41.672 14:14:33 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:12:41.672 14:14:33 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:41.672 14:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 14:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 14:14:33 -- bdev/blockdev.sh@566 -- # iostats='{ 00:12:41.672 "tick_rate": 2200000000, 00:12:41.672 "ticks": 1519484419636, 00:12:41.672 "bdevs": [ 00:12:41.672 { 00:12:41.672 "name": "Malloc_STAT", 00:12:41.672 "bytes_read": 1027641856, 00:12:41.672 "num_read_ops": 250883, 00:12:41.672 "bytes_written": 0, 00:12:41.672 "num_write_ops": 0, 00:12:41.672 "bytes_unmapped": 0, 00:12:41.672 "num_unmap_ops": 0, 00:12:41.672 "bytes_copied": 0, 00:12:41.672 "num_copy_ops": 0, 00:12:41.672 "read_latency_ticks": 2176328626036, 00:12:41.672 "max_read_latency_ticks": 11615866, 00:12:41.672 "min_read_latency_ticks": 381796, 00:12:41.672 "write_latency_ticks": 0, 00:12:41.672 "max_write_latency_ticks": 0, 00:12:41.672 "min_write_latency_ticks": 0, 00:12:41.672 "unmap_latency_ticks": 0, 00:12:41.672 "max_unmap_latency_ticks": 0, 00:12:41.672 "min_unmap_latency_ticks": 0, 00:12:41.672 "copy_latency_ticks": 0, 00:12:41.672 
"max_copy_latency_ticks": 0, 00:12:41.672 "min_copy_latency_ticks": 0, 00:12:41.672 "io_error": {} 00:12:41.672 } 00:12:41.672 ] 00:12:41.672 }' 00:12:41.672 14:14:33 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:12:41.672 14:14:33 -- bdev/blockdev.sh@567 -- # io_count1=250883 00:12:41.672 14:14:33 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:12:41.672 14:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.672 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.672 14:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.672 14:14:33 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:12:41.672 "tick_rate": 2200000000, 00:12:41.672 "ticks": 1519635132452, 00:12:41.672 "name": "Malloc_STAT", 00:12:41.672 "channels": [ 00:12:41.672 { 00:12:41.672 "thread_id": 2, 00:12:41.672 "bytes_read": 527433728, 00:12:41.672 "num_read_ops": 128768, 00:12:41.672 "bytes_written": 0, 00:12:41.672 "num_write_ops": 0, 00:12:41.672 "bytes_unmapped": 0, 00:12:41.672 "num_unmap_ops": 0, 00:12:41.672 "bytes_copied": 0, 00:12:41.672 "num_copy_ops": 0, 00:12:41.672 "read_latency_ticks": 1126005766558, 00:12:41.672 "max_read_latency_ticks": 11615866, 00:12:41.672 "min_read_latency_ticks": 6187522, 00:12:41.672 "write_latency_ticks": 0, 00:12:41.672 "max_write_latency_ticks": 0, 00:12:41.672 "min_write_latency_ticks": 0, 00:12:41.672 "unmap_latency_ticks": 0, 00:12:41.672 "max_unmap_latency_ticks": 0, 00:12:41.672 "min_unmap_latency_ticks": 0, 00:12:41.672 "copy_latency_ticks": 0, 00:12:41.672 "max_copy_latency_ticks": 0, 00:12:41.672 "min_copy_latency_ticks": 0 00:12:41.672 }, 00:12:41.672 { 00:12:41.672 "thread_id": 3, 00:12:41.672 "bytes_read": 536870912, 00:12:41.672 "num_read_ops": 131072, 00:12:41.672 "bytes_written": 0, 00:12:41.672 "num_write_ops": 0, 00:12:41.672 "bytes_unmapped": 0, 00:12:41.672 "num_unmap_ops": 0, 00:12:41.672 "bytes_copied": 0, 00:12:41.672 "num_copy_ops": 0, 00:12:41.672 "read_latency_ticks": 1128253483326, 00:12:41.672 "max_read_latency_ticks": 11466628, 00:12:41.672 "min_read_latency_ticks": 5917812, 00:12:41.672 "write_latency_ticks": 0, 00:12:41.672 "max_write_latency_ticks": 0, 00:12:41.672 "min_write_latency_ticks": 0, 00:12:41.672 "unmap_latency_ticks": 0, 00:12:41.672 "max_unmap_latency_ticks": 0, 00:12:41.672 "min_unmap_latency_ticks": 0, 00:12:41.672 "copy_latency_ticks": 0, 00:12:41.672 "max_copy_latency_ticks": 0, 00:12:41.672 "min_copy_latency_ticks": 0 00:12:41.672 } 00:12:41.672 ] 00:12:41.672 }' 00:12:41.672 14:14:33 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:12:41.931 14:14:33 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=128768 00:12:41.931 14:14:33 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=128768 00:12:41.931 14:14:33 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:12:41.931 14:14:33 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=131072 00:12:41.931 14:14:33 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=259840 00:12:41.931 14:14:33 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:12:41.931 14:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.931 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.931 14:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.931 14:14:33 -- bdev/blockdev.sh@575 -- # iostats='{ 00:12:41.931 "tick_rate": 2200000000, 00:12:41.931 "ticks": 1519909838764, 00:12:41.931 "bdevs": [ 00:12:41.931 { 00:12:41.931 "name": 
"Malloc_STAT", 00:12:41.931 "bytes_read": 1130402304, 00:12:41.931 "num_read_ops": 275971, 00:12:41.931 "bytes_written": 0, 00:12:41.931 "num_write_ops": 0, 00:12:41.931 "bytes_unmapped": 0, 00:12:41.931 "num_unmap_ops": 0, 00:12:41.931 "bytes_copied": 0, 00:12:41.931 "num_copy_ops": 0, 00:12:41.931 "read_latency_ticks": 2394302947642, 00:12:41.931 "max_read_latency_ticks": 11615866, 00:12:41.931 "min_read_latency_ticks": 381796, 00:12:41.931 "write_latency_ticks": 0, 00:12:41.931 "max_write_latency_ticks": 0, 00:12:41.931 "min_write_latency_ticks": 0, 00:12:41.931 "unmap_latency_ticks": 0, 00:12:41.931 "max_unmap_latency_ticks": 0, 00:12:41.931 "min_unmap_latency_ticks": 0, 00:12:41.931 "copy_latency_ticks": 0, 00:12:41.931 "max_copy_latency_ticks": 0, 00:12:41.931 "min_copy_latency_ticks": 0, 00:12:41.931 "io_error": {} 00:12:41.931 } 00:12:41.931 ] 00:12:41.931 }' 00:12:41.931 14:14:33 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:12:41.931 14:14:33 -- bdev/blockdev.sh@576 -- # io_count2=275971 00:12:41.931 14:14:33 -- bdev/blockdev.sh@581 -- # '[' 259840 -lt 250883 ']' 00:12:41.931 14:14:33 -- bdev/blockdev.sh@581 -- # '[' 259840 -gt 275971 ']' 00:12:41.931 14:14:33 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:12:41.931 14:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.931 14:14:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.931 00:12:41.931 Latency(us) 00:12:41.931 [2024-11-18T14:14:34.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.931 [2024-11-18T14:14:34.005Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:12:41.931 Malloc_STAT : 2.19 64221.54 250.87 0.00 0.00 3977.27 953.25 5302.46 00:12:41.931 [2024-11-18T14:14:34.005Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:12:41.931 Malloc_STAT : 2.19 65440.84 255.63 0.00 0.00 3903.65 703.77 5213.09 00:12:41.931 [2024-11-18T14:14:34.005Z] =================================================================================================================== 00:12:41.931 [2024-11-18T14:14:34.005Z] Total : 129662.37 506.49 0.00 0.00 3940.09 703.77 5302.46 00:12:41.931 0 00:12:41.931 14:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.931 14:14:33 -- bdev/blockdev.sh@607 -- # killprocess 121926 00:12:41.931 14:14:33 -- common/autotest_common.sh@936 -- # '[' -z 121926 ']' 00:12:41.931 14:14:33 -- common/autotest_common.sh@940 -- # kill -0 121926 00:12:41.931 14:14:33 -- common/autotest_common.sh@941 -- # uname 00:12:41.931 14:14:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:41.931 14:14:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121926 00:12:41.931 14:14:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:41.931 14:14:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:41.931 killing process with pid 121926 00:12:41.931 14:14:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121926' 00:12:41.931 14:14:33 -- common/autotest_common.sh@955 -- # kill 121926 00:12:41.931 Received shutdown signal, test time was about 2.250844 seconds 00:12:41.931 00:12:41.931 Latency(us) 00:12:41.931 [2024-11-18T14:14:34.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.931 [2024-11-18T14:14:34.005Z] =================================================================================================================== 00:12:41.931 
[2024-11-18T14:14:34.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:41.931 14:14:33 -- common/autotest_common.sh@960 -- # wait 121926 00:12:42.499 14:14:34 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:12:42.499 00:12:42.499 real 0m3.671s 00:12:42.499 user 0m7.112s 00:12:42.499 sys 0m0.387s 00:12:42.499 14:14:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.499 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:12:42.499 ************************************ 00:12:42.499 END TEST bdev_stat 00:12:42.499 ************************************ 00:12:42.499 14:14:34 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:12:42.499 14:14:34 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:12:42.499 14:14:34 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:42.499 14:14:34 -- bdev/blockdev.sh@809 -- # cleanup 00:12:42.499 14:14:34 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:42.499 14:14:34 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:42.499 14:14:34 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:12:42.499 14:14:34 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:12:42.499 14:14:34 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:12:42.499 14:14:34 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:12:42.499 ************************************ 00:12:42.499 END TEST blockdev_general 00:12:42.499 ************************************ 00:12:42.499 00:12:42.499 real 1m54.359s 00:12:42.499 user 5m11.568s 00:12:42.499 sys 0m18.804s 00:12:42.499 14:14:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:42.499 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:12:42.499 14:14:34 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:42.499 14:14:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:42.499 14:14:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.499 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:12:42.499 ************************************ 00:12:42.499 START TEST bdev_raid 00:12:42.499 ************************************ 00:12:42.499 14:14:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:12:42.499 * Looking for test storage... 
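
The bdev_stat pass above rests on one invariant: I/O keeps flowing between RPC calls, so the per-channel read counts, sampled between two aggregate snapshots, must sum to a value inside those snapshots (250883 <= 259840 <= 275971 in this run, which is why both the -lt and -gt guards fail and the test proceeds to teardown). A minimal bash sketch of that check, assuming a running bdevperf listening on /var/tmp/spdk.sock; the variable names are illustrative, not the script's own:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    io_count1=$($rpc -s $sock bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    sum=0                                   # total reads across all channels
    for ops in $($rpc -s $sock bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[].num_read_ops'); do
        sum=$((sum + ops))
    done
    io_count2=$($rpc -s $sock bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    # The channel sum was taken between the two aggregate snapshots,
    # so it must land in [io_count1, io_count2] while I/O is still running.
    [ "$io_count1" -le "$sum" ] && [ "$sum" -le "$io_count2" ]
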
00:12:42.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:42.499 14:14:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:42.499 14:14:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:42.499 14:14:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:42.759 14:14:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:42.759 14:14:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:42.759 14:14:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:42.759 14:14:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:42.759 14:14:34 -- scripts/common.sh@335 -- # IFS=.-: 00:12:42.759 14:14:34 -- scripts/common.sh@335 -- # read -ra ver1 00:12:42.759 14:14:34 -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.759 14:14:34 -- scripts/common.sh@336 -- # read -ra ver2 00:12:42.759 14:14:34 -- scripts/common.sh@337 -- # local 'op=<' 00:12:42.759 14:14:34 -- scripts/common.sh@339 -- # ver1_l=2 00:12:42.759 14:14:34 -- scripts/common.sh@340 -- # ver2_l=1 00:12:42.759 14:14:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:42.759 14:14:34 -- scripts/common.sh@343 -- # case "$op" in 00:12:42.759 14:14:34 -- scripts/common.sh@344 -- # : 1 00:12:42.759 14:14:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:42.759 14:14:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.759 14:14:34 -- scripts/common.sh@364 -- # decimal 1 00:12:42.759 14:14:34 -- scripts/common.sh@352 -- # local d=1 00:12:42.759 14:14:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.759 14:14:34 -- scripts/common.sh@354 -- # echo 1 00:12:42.759 14:14:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:42.759 14:14:34 -- scripts/common.sh@365 -- # decimal 2 00:12:42.759 14:14:34 -- scripts/common.sh@352 -- # local d=2 00:12:42.759 14:14:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.759 14:14:34 -- scripts/common.sh@354 -- # echo 2 00:12:42.759 14:14:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:42.759 14:14:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:42.759 14:14:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:42.759 14:14:34 -- scripts/common.sh@367 -- # return 0 00:12:42.759 14:14:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.759 14:14:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.759 --rc genhtml_branch_coverage=1 00:12:42.759 --rc genhtml_function_coverage=1 00:12:42.759 --rc genhtml_legend=1 00:12:42.759 --rc geninfo_all_blocks=1 00:12:42.759 --rc geninfo_unexecuted_blocks=1 00:12:42.759 00:12:42.759 ' 00:12:42.759 14:14:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.759 --rc genhtml_branch_coverage=1 00:12:42.759 --rc genhtml_function_coverage=1 00:12:42.759 --rc genhtml_legend=1 00:12:42.759 --rc geninfo_all_blocks=1 00:12:42.759 --rc geninfo_unexecuted_blocks=1 00:12:42.759 00:12:42.759 ' 00:12:42.759 14:14:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.759 --rc genhtml_branch_coverage=1 00:12:42.759 --rc genhtml_function_coverage=1 00:12:42.759 --rc genhtml_legend=1 00:12:42.759 --rc geninfo_all_blocks=1 00:12:42.759 --rc geninfo_unexecuted_blocks=1 00:12:42.759 00:12:42.759 ' 00:12:42.759 14:14:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:42.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.759 --rc genhtml_branch_coverage=1 00:12:42.759 --rc genhtml_function_coverage=1 00:12:42.759 --rc genhtml_legend=1 00:12:42.759 --rc geninfo_all_blocks=1 00:12:42.759 --rc geninfo_unexecuted_blocks=1 00:12:42.759 00:12:42.759 ' 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:42.759 14:14:34 -- bdev/nbd_common.sh@6 -- # set -e 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@716 -- # uname -s 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:12:42.759 14:14:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:42.759 14:14:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:42.759 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:12:42.759 ************************************ 00:12:42.759 START TEST raid_function_test_raid0 00:12:42.759 ************************************ 00:12:42.759 14:14:34 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@86 -- # raid_pid=122075 00:12:42.759 Process raid pid: 122075 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122075' 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122075 /var/tmp/spdk-raid.sock 00:12:42.759 14:14:34 -- common/autotest_common.sh@829 -- # '[' -z 122075 ']' 00:12:42.759 14:14:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:42.759 14:14:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:42.759 14:14:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:42.759 14:14:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.759 14:14:34 -- common/autotest_common.sh@10 -- # set +x 00:12:42.759 14:14:34 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:42.759 [2024-11-18 14:14:34.683847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
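
The cmp_versions sequence a few lines up decides whether the installed lcov is new enough by splitting each version string on dots and comparing the numeric fields left to right. A condensed sketch of the same idea, assuming purely numeric fields; this is a restatement for readability, not the scripts/common.sh source:

    lt() {  # succeeds when dotted version $1 is strictly older than $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the comparison performed above
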
00:12:42.759 [2024-11-18 14:14:34.684093] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.018 [2024-11-18 14:14:34.831575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.018 [2024-11-18 14:14:34.906991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.018 [2024-11-18 14:14:34.976582] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.585 14:14:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.585 14:14:35 -- common/autotest_common.sh@862 -- # return 0 00:12:43.585 14:14:35 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:12:43.585 14:14:35 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:12:43.585 14:14:35 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:12:43.585 14:14:35 -- bdev/bdev_raid.sh@70 -- # cat 00:12:43.585 14:14:35 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:12:43.844 [2024-11-18 14:14:35.879043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:43.844 [2024-11-18 14:14:35.881914] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:43.844 [2024-11-18 14:14:35.881981] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:12:43.844 [2024-11-18 14:14:35.881993] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:43.844 [2024-11-18 14:14:35.882119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:12:43.844 [2024-11-18 14:14:35.882524] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:12:43.844 [2024-11-18 14:14:35.882544] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:12:43.844 [2024-11-18 14:14:35.882708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.844 Base_1 00:12:43.844 Base_2 00:12:43.844 14:14:35 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:12:43.844 14:14:35 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:43.844 14:14:35 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:12:44.103 14:14:36 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:12:44.103 14:14:36 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:12:44.103 14:14:36 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@12 -- # local i 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.103 14:14:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:12:44.363 [2024-11-18 
14:14:36.327548] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:12:44.363 /dev/nbd0 00:12:44.363 14:14:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.363 14:14:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.363 14:14:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:44.363 14:14:36 -- common/autotest_common.sh@867 -- # local i 00:12:44.363 14:14:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:44.363 14:14:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:44.363 14:14:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:44.363 14:14:36 -- common/autotest_common.sh@871 -- # break 00:12:44.363 14:14:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:44.363 14:14:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:44.363 14:14:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.363 1+0 records in 00:12:44.363 1+0 records out 00:12:44.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224597 s, 18.2 MB/s 00:12:44.363 14:14:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.363 14:14:36 -- common/autotest_common.sh@884 -- # size=4096 00:12:44.363 14:14:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.363 14:14:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:44.363 14:14:36 -- common/autotest_common.sh@887 -- # return 0 00:12:44.363 14:14:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.363 14:14:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:44.363 14:14:36 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:44.363 14:14:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:44.363 14:14:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:44.622 { 00:12:44.622 "nbd_device": "/dev/nbd0", 00:12:44.622 "bdev_name": "raid" 00:12:44.622 } 00:12:44.622 ]' 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:44.622 { 00:12:44.622 "nbd_device": "/dev/nbd0", 00:12:44.622 "bdev_name": "raid" 00:12:44.622 } 00:12:44.622 ]' 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@65 -- # count=1 00:12:44.622 14:14:36 -- bdev/nbd_common.sh@66 -- # echo 1 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@98 -- # count=1 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@20 -- # local blksize 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:12:44.622 14:14:36 -- bdev/bdev_raid.sh@21 -- # cut -d 
' ' -f 5 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:12:44.882 4096+0 records in 00:12:44.882 4096+0 records out 00:12:44.882 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0285443 s, 73.5 MB/s 00:12:44.882 14:14:36 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:45.142 4096+0 records in 00:12:45.142 4096+0 records out 00:12:45.142 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.266483 s, 7.9 MB/s 00:12:45.142 14:14:36 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:12:45.142 14:14:36 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:45.142 128+0 records in 00:12:45.142 128+0 records out 00:12:45.142 65536 bytes (66 kB, 64 KiB) copied, 0.000655231 s, 100 MB/s 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:45.142 2035+0 records in 00:12:45.142 2035+0 records out 00:12:45.142 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00585358 s, 178 MB/s 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:45.142 456+0 records in 00:12:45.142 456+0 records out 00:12:45.142 233472 bytes (233 kB, 228 KiB) copied, 0.00213586 s, 109 MB/s 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:45.142 14:14:37 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@53 -- # return 0 00:12:45.142 14:14:37 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:12:45.142 14:14:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:45.142 14:14:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:45.142 14:14:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.142 14:14:37 -- bdev/nbd_common.sh@51 -- # local i 00:12:45.142 14:14:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.142 14:14:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:12:45.401 [2024-11-18 14:14:37.360274] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@41 -- # break 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.401 14:14:37 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:45.401 14:14:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@65 -- # true 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@65 -- # count=0 00:12:45.661 14:14:37 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:45.661 14:14:37 -- bdev/bdev_raid.sh@106 -- # count=0 00:12:45.661 14:14:37 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:12:45.661 14:14:37 -- bdev/bdev_raid.sh@111 -- # killprocess 122075 00:12:45.661 14:14:37 -- common/autotest_common.sh@936 -- # '[' -z 122075 ']' 00:12:45.661 14:14:37 -- common/autotest_common.sh@940 -- # kill -0 122075 00:12:45.661 14:14:37 -- common/autotest_common.sh@941 -- # uname 00:12:45.661 14:14:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.661 14:14:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122075 00:12:45.661 14:14:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:45.661 14:14:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:45.661 killing process with pid 122075 00:12:45.661 14:14:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122075' 00:12:45.661 14:14:37 -- common/autotest_common.sh@955 -- # kill 122075 00:12:45.661 [2024-11-18 14:14:37.705863] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.661 14:14:37 -- common/autotest_common.sh@960 -- # wait 122075 00:12:45.661 [2024-11-18 14:14:37.705980] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.661 [2024-11-18 14:14:37.706044] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.661 [2024-11-18 14:14:37.706057] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:12:45.661 [2024-11-18 14:14:37.726012] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.920 14:14:37 -- bdev/bdev_raid.sh@113 -- # return 0 00:12:45.920 00:12:45.920 real 0m3.321s 00:12:45.920 user 0m4.531s 00:12:45.920 sys 0m0.912s 00:12:45.920 14:14:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.920 14:14:37 -- common/autotest_common.sh@10 -- # set +x 00:12:45.920 ************************************ 00:12:45.920 END TEST raid_function_test_raid0 00:12:45.920 ************************************ 00:12:46.181 14:14:37 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:12:46.181 14:14:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:46.181 14:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.181 14:14:37 -- common/autotest_common.sh@10 -- # set +x 00:12:46.181 ************************************ 00:12:46.181 START TEST raid_function_test_concat 00:12:46.181 ************************************ 00:12:46.181 14:14:38 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@86 -- # raid_pid=122219 00:12:46.181 Process raid pid: 122219 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122219' 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122219 /var/tmp/spdk-raid.sock 00:12:46.181 14:14:38 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:46.181 14:14:38 -- common/autotest_common.sh@829 -- # '[' -z 122219 ']' 00:12:46.181 14:14:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:46.181 14:14:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:46.181 14:14:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:46.181 14:14:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.181 14:14:38 -- common/autotest_common.sh@10 -- # set +x 00:12:46.181 [2024-11-18 14:14:38.060644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
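
The raid0 function test that just ended exercised one pattern three times: seed the NBD-exported raid bdev with random data, punch an unmap hole, and prove the device still matches a reference file that had the same range zeroed. A condensed bash restatement of that raid_unmap_data_verify loop, assuming unmapped regions read back as zeroes (true for the malloc base bdevs used here); nbd and ref mirror the /dev/nbd0 and /raidrandtest names from the log:

    nbd=/dev/nbd0; ref=/raidrandtest
    offs=(0 1028 321); nums=(128 2035 456)        # block offset/length pairs from the test
    dd if=/dev/urandom of=$ref bs=512 count=4096              # seed 2 MiB of reference data
    dd if=$ref of=$nbd bs=512 count=4096 oflag=direct         # write it through the raid bdev
    for i in 0 1 2; do
        # zero the range in the reference file, then unmap the same range on the device
        dd if=/dev/zero of=$ref bs=512 seek=${offs[i]} count=${nums[i]} conv=notrunc
        blkdiscard -o $(( offs[i] * 512 )) -l $(( nums[i] * 512 )) $nbd
        blockdev --flushbufs $nbd
        cmp -b -n 2097152 $ref $nbd               # byte-for-byte agreement or the test fails
    done

The concat run that starts here repeats the same loop; only the raid level handed to configure_raid_bdev changes.
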
00:12:46.181 [2024-11-18 14:14:38.060875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.181 [2024-11-18 14:14:38.208773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.458 [2024-11-18 14:14:38.296517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.458 [2024-11-18 14:14:38.371583] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.036 14:14:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.036 14:14:38 -- common/autotest_common.sh@862 -- # return 0 00:12:47.036 14:14:38 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:12:47.036 14:14:38 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:12:47.036 14:14:38 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:12:47.036 14:14:38 -- bdev/bdev_raid.sh@70 -- # cat 00:12:47.036 14:14:38 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:12:47.294 [2024-11-18 14:14:39.257726] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:47.294 [2024-11-18 14:14:39.260033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:47.294 [2024-11-18 14:14:39.260302] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:12:47.294 [2024-11-18 14:14:39.260463] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:47.294 [2024-11-18 14:14:39.260683] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:12:47.294 [2024-11-18 14:14:39.261269] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:12:47.294 [2024-11-18 14:14:39.261438] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:12:47.294 [2024-11-18 14:14:39.261787] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.294 Base_1 00:12:47.294 Base_2 00:12:47.294 14:14:39 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:12:47.294 14:14:39 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:47.294 14:14:39 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:12:47.553 14:14:39 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:12:47.553 14:14:39 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:12:47.553 14:14:39 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@12 -- # local i 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.553 14:14:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:12:47.812 [2024-11-18 
14:14:39.734291] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:12:47.812 /dev/nbd0 00:12:47.812 14:14:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:47.812 14:14:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:47.812 14:14:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:47.812 14:14:39 -- common/autotest_common.sh@867 -- # local i 00:12:47.812 14:14:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.812 14:14:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.812 14:14:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:47.812 14:14:39 -- common/autotest_common.sh@871 -- # break 00:12:47.812 14:14:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.812 14:14:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.812 14:14:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.812 1+0 records in 00:12:47.812 1+0 records out 00:12:47.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055793 s, 7.3 MB/s 00:12:47.812 14:14:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.812 14:14:39 -- common/autotest_common.sh@884 -- # size=4096 00:12:47.812 14:14:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.812 14:14:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.812 14:14:39 -- common/autotest_common.sh@887 -- # return 0 00:12:47.812 14:14:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.812 14:14:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:47.812 14:14:39 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:47.812 14:14:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:47.812 14:14:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:48.070 14:14:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:48.070 { 00:12:48.070 "nbd_device": "/dev/nbd0", 00:12:48.071 "bdev_name": "raid" 00:12:48.071 } 00:12:48.071 ]' 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:48.071 { 00:12:48.071 "nbd_device": "/dev/nbd0", 00:12:48.071 "bdev_name": "raid" 00:12:48.071 } 00:12:48.071 ]' 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@65 -- # count=1 00:12:48.071 14:14:39 -- bdev/nbd_common.sh@66 -- # echo 1 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@98 -- # count=1 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@20 -- # local blksize 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@21 -- # grep -v 
LOG-SEC 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:12:48.071 4096+0 records in 00:12:48.071 4096+0 records out 00:12:48.071 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0262733 s, 79.8 MB/s 00:12:48.071 14:14:40 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:48.329 4096+0 records in 00:12:48.329 4096+0 records out 00:12:48.329 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.268796 s, 7.8 MB/s 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:48.329 128+0 records in 00:12:48.329 128+0 records out 00:12:48.329 65536 bytes (66 kB, 64 KiB) copied, 0.000568788 s, 115 MB/s 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:48.329 2035+0 records in 00:12:48.329 2035+0 records out 00:12:48.329 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00669021 s, 156 MB/s 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:48.329 456+0 records in 00:12:48.329 456+0 records out 00:12:48.329 233472 bytes (233 kB, 228 KiB) copied, 0.0016699 s, 140 MB/s 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:48.329 14:14:40 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@53 -- # return 0 00:12:48.329 14:14:40 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:12:48.329 14:14:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:48.329 14:14:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:48.329 14:14:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.329 14:14:40 -- bdev/nbd_common.sh@51 -- # local i 00:12:48.329 14:14:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.329 14:14:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:12:48.588 14:14:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.588 [2024-11-18 14:14:40.660132] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.588 14:14:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.588 14:14:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.588 14:14:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.588 14:14:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.588 14:14:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@41 -- # break 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.846 14:14:40 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@65 -- # true 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@65 -- # count=0 00:12:48.846 14:14:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:48.846 14:14:40 -- bdev/bdev_raid.sh@106 -- # count=0 00:12:48.846 14:14:40 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:12:48.846 14:14:40 -- bdev/bdev_raid.sh@111 -- # killprocess 122219 00:12:48.846 14:14:40 -- common/autotest_common.sh@936 -- # '[' -z 122219 ']' 00:12:48.846 14:14:40 -- common/autotest_common.sh@940 -- # kill -0 122219 00:12:48.846 14:14:40 -- common/autotest_common.sh@941 -- # uname 00:12:48.846 14:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.846 14:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122219 00:12:49.104 14:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:49.104 14:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:49.104 14:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122219' 00:12:49.104 killing process with pid 122219 00:12:49.104 14:14:40 -- common/autotest_common.sh@955 -- # kill 122219 00:12:49.105 14:14:40 -- common/autotest_common.sh@960 -- 
# wait 122219 00:12:49.105 [2024-11-18 14:14:40.936827] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:49.105 [2024-11-18 14:14:40.937125] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.105 [2024-11-18 14:14:40.937365] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.105 [2024-11-18 14:14:40.937489] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:12:49.105 [2024-11-18 14:14:40.963756] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@113 -- # return 0 00:12:49.363 00:12:49.363 real 0m3.246s 00:12:49.363 user 0m4.275s 00:12:49.363 sys 0m0.956s 00:12:49.363 14:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:49.363 ************************************ 00:12:49.363 END TEST raid_function_test_concat 00:12:49.363 14:14:41 -- common/autotest_common.sh@10 -- # set +x 00:12:49.363 ************************************ 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:12:49.363 14:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:49.363 14:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.363 14:14:41 -- common/autotest_common.sh@10 -- # set +x 00:12:49.363 ************************************ 00:12:49.363 START TEST raid0_resize_test 00:12:49.363 ************************************ 00:12:49.363 14:14:41 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@301 -- # raid_pid=122364 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 122364' 00:12:49.363 Process raid pid: 122364 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@303 -- # waitforlisten 122364 /var/tmp/spdk-raid.sock 00:12:49.363 14:14:41 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:49.363 14:14:41 -- common/autotest_common.sh@829 -- # '[' -z 122364 ']' 00:12:49.363 14:14:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:49.363 14:14:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:49.363 14:14:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:49.363 14:14:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.363 14:14:41 -- common/autotest_common.sh@10 -- # set +x 00:12:49.363 [2024-11-18 14:14:41.372975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:49.363 [2024-11-18 14:14:41.373217] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.622 [2024-11-18 14:14:41.520018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.622 [2024-11-18 14:14:41.589720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.622 [2024-11-18 14:14:41.659848] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.558 14:14:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.558 14:14:42 -- common/autotest_common.sh@862 -- # return 0 00:12:50.558 14:14:42 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:12:50.558 Base_1 00:12:50.558 14:14:42 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:12:50.817 Base_2 00:12:50.817 14:14:42 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:12:51.075 [2024-11-18 14:14:43.005927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:51.075 [2024-11-18 14:14:43.008013] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:51.075 [2024-11-18 14:14:43.008081] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:12:51.075 [2024-11-18 14:14:43.008095] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:51.075 [2024-11-18 14:14:43.008247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:12:51.075 [2024-11-18 14:14:43.008733] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:12:51.075 [2024-11-18 14:14:43.008758] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:12:51.075 [2024-11-18 14:14:43.008960] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.075 14:14:43 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:12:51.334 [2024-11-18 14:14:43.246194] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:51.334 [2024-11-18 14:14:43.246225] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:51.334 true 00:12:51.334 14:14:43 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:12:51.334 14:14:43 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:51.593 [2024-11-18 14:14:43.442327] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.593 14:14:43 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:12:51.593 14:14:43 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:12:51.593 14:14:43 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:12:51.593 14:14:43 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:12:51.593 [2024-11-18 14:14:43.634198] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:12:51.593 [2024-11-18 14:14:43.634230] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:51.593 [2024-11-18 14:14:43.634272] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:12:51.593 [2024-11-18 14:14:43.634344] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:51.593 true 00:12:51.593 14:14:43 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:12:51.593 14:14:43 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:12:51.852 [2024-11-18 14:14:43.838337] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.852 14:14:43 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:12:51.852 14:14:43 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:12:51.852 14:14:43 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:12:51.852 14:14:43 -- bdev/bdev_raid.sh@332 -- # killprocess 122364 00:12:51.852 14:14:43 -- common/autotest_common.sh@936 -- # '[' -z 122364 ']' 00:12:51.852 14:14:43 -- common/autotest_common.sh@940 -- # kill -0 122364 00:12:51.852 14:14:43 -- common/autotest_common.sh@941 -- # uname 00:12:51.852 14:14:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.852 14:14:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122364 00:12:51.852 14:14:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:51.852 14:14:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:51.852 killing process with pid 122364 00:12:51.852 14:14:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122364' 00:12:51.852 14:14:43 -- common/autotest_common.sh@955 -- # kill 122364 00:12:51.852 14:14:43 -- common/autotest_common.sh@960 -- # wait 122364 00:12:51.852 [2024-11-18 14:14:43.876246] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.852 [2024-11-18 14:14:43.876344] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.852 [2024-11-18 14:14:43.876420] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.852 [2024-11-18 14:14:43.876444] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:12:51.852 [2024-11-18 14:14:43.876956] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.110 14:14:44 -- bdev/bdev_raid.sh@334 -- # return 0 00:12:52.110 00:12:52.110 real 0m2.848s 00:12:52.110 user 0m4.365s 00:12:52.110 sys 0m0.457s 00:12:52.110 ************************************ 00:12:52.110 END TEST raid0_resize_test 00:12:52.110 ************************************ 00:12:52.110 14:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.110 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:52.367 14:14:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:52.367 14:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.367 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.367 ************************************ 00:12:52.367 START TEST 
raid_state_function_test 00:12:52.367 ************************************ 00:12:52.367 14:14:44 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:52.367 14:14:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=122446 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122446' 00:12:52.368 Process raid pid: 122446 00:12:52.368 14:14:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122446 /var/tmp/spdk-raid.sock 00:12:52.368 14:14:44 -- common/autotest_common.sh@829 -- # '[' -z 122446 ']' 00:12:52.368 14:14:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:52.368 14:14:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.368 14:14:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:52.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:52.368 14:14:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.368 14:14:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.368 [2024-11-18 14:14:44.284709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:52.368 [2024-11-18 14:14:44.285136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.368 [2024-11-18 14:14:44.423921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.626 [2024-11-18 14:14:44.493486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.626 [2024-11-18 14:14:44.564419] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.193 14:14:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.193 14:14:45 -- common/autotest_common.sh@862 -- # return 0 00:12:53.193 14:14:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:53.451 [2024-11-18 14:14:45.466739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.451 [2024-11-18 14:14:45.467104] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.451 [2024-11-18 14:14:45.467239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.451 [2024-11-18 14:14:45.467391] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.451 14:14:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.709 14:14:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:53.709 "name": "Existed_Raid", 00:12:53.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.709 "strip_size_kb": 64, 00:12:53.709 "state": "configuring", 00:12:53.709 "raid_level": "raid0", 00:12:53.709 "superblock": false, 00:12:53.709 "num_base_bdevs": 2, 00:12:53.709 "num_base_bdevs_discovered": 0, 00:12:53.709 "num_base_bdevs_operational": 2, 00:12:53.709 "base_bdevs_list": [ 00:12:53.709 { 00:12:53.709 "name": "BaseBdev1", 00:12:53.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.709 "is_configured": false, 00:12:53.709 "data_offset": 0, 00:12:53.709 "data_size": 0 00:12:53.709 }, 00:12:53.709 { 00:12:53.709 "name": "BaseBdev2", 00:12:53.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.709 "is_configured": false, 00:12:53.709 "data_offset": 0, 00:12:53.709 "data_size": 0 00:12:53.709 } 00:12:53.709 ] 00:12:53.709 }' 00:12:53.709 14:14:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:53.709 14:14:45 -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.275 14:14:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:54.534 [2024-11-18 14:14:46.430767] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.534 [2024-11-18 14:14:46.430964] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:12:54.534 14:14:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:54.792 [2024-11-18 14:14:46.678817] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.792 [2024-11-18 14:14:46.679031] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.792 [2024-11-18 14:14:46.679150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.792 [2024-11-18 14:14:46.679243] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.792 14:14:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:55.051 [2024-11-18 14:14:46.880844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.051 BaseBdev1 00:12:55.051 14:14:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:55.051 14:14:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:55.051 14:14:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:55.051 14:14:46 -- common/autotest_common.sh@899 -- # local i 00:12:55.051 14:14:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:55.051 14:14:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:55.051 14:14:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:55.051 14:14:47 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:55.309 [ 00:12:55.309 { 00:12:55.309 "name": "BaseBdev1", 00:12:55.309 "aliases": [ 00:12:55.309 "f6c40950-92fe-412b-9f13-145201248fa6" 00:12:55.309 ], 00:12:55.309 "product_name": "Malloc disk", 00:12:55.309 "block_size": 512, 00:12:55.309 "num_blocks": 65536, 00:12:55.309 "uuid": "f6c40950-92fe-412b-9f13-145201248fa6", 00:12:55.309 "assigned_rate_limits": { 00:12:55.309 "rw_ios_per_sec": 0, 00:12:55.309 "rw_mbytes_per_sec": 0, 00:12:55.309 "r_mbytes_per_sec": 0, 00:12:55.309 "w_mbytes_per_sec": 0 00:12:55.309 }, 00:12:55.309 "claimed": true, 00:12:55.309 "claim_type": "exclusive_write", 00:12:55.309 "zoned": false, 00:12:55.309 "supported_io_types": { 00:12:55.309 "read": true, 00:12:55.309 "write": true, 00:12:55.309 "unmap": true, 00:12:55.309 "write_zeroes": true, 00:12:55.309 "flush": true, 00:12:55.309 "reset": true, 00:12:55.309 "compare": false, 00:12:55.309 "compare_and_write": false, 00:12:55.309 "abort": true, 00:12:55.309 "nvme_admin": false, 00:12:55.309 "nvme_io": false 00:12:55.309 }, 00:12:55.309 "memory_domains": [ 00:12:55.309 { 00:12:55.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.309 "dma_device_type": 2 00:12:55.309 } 00:12:55.309 ], 00:12:55.309 "driver_specific": {} 00:12:55.309 } 00:12:55.309 ] 00:12:55.309 14:14:47 
-- common/autotest_common.sh@905 -- # return 0 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.309 14:14:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.567 14:14:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:55.567 "name": "Existed_Raid", 00:12:55.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.567 "strip_size_kb": 64, 00:12:55.567 "state": "configuring", 00:12:55.567 "raid_level": "raid0", 00:12:55.567 "superblock": false, 00:12:55.567 "num_base_bdevs": 2, 00:12:55.567 "num_base_bdevs_discovered": 1, 00:12:55.567 "num_base_bdevs_operational": 2, 00:12:55.567 "base_bdevs_list": [ 00:12:55.567 { 00:12:55.567 "name": "BaseBdev1", 00:12:55.567 "uuid": "f6c40950-92fe-412b-9f13-145201248fa6", 00:12:55.567 "is_configured": true, 00:12:55.567 "data_offset": 0, 00:12:55.567 "data_size": 65536 00:12:55.567 }, 00:12:55.567 { 00:12:55.567 "name": "BaseBdev2", 00:12:55.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.567 "is_configured": false, 00:12:55.567 "data_offset": 0, 00:12:55.567 "data_size": 0 00:12:55.567 } 00:12:55.567 ] 00:12:55.567 }' 00:12:55.567 14:14:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:55.567 14:14:47 -- common/autotest_common.sh@10 -- # set +x 00:12:56.134 14:14:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:56.393 [2024-11-18 14:14:48.333088] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.393 [2024-11-18 14:14:48.333282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:12:56.393 14:14:48 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:56.393 14:14:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:56.652 [2024-11-18 14:14:48.525186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.652 [2024-11-18 14:14:48.527333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.652 [2024-11-18 14:14:48.527524] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:56.652 14:14:48 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.652 14:14:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.911 14:14:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:56.911 "name": "Existed_Raid", 00:12:56.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.911 "strip_size_kb": 64, 00:12:56.911 "state": "configuring", 00:12:56.911 "raid_level": "raid0", 00:12:56.911 "superblock": false, 00:12:56.911 "num_base_bdevs": 2, 00:12:56.911 "num_base_bdevs_discovered": 1, 00:12:56.911 "num_base_bdevs_operational": 2, 00:12:56.911 "base_bdevs_list": [ 00:12:56.911 { 00:12:56.911 "name": "BaseBdev1", 00:12:56.911 "uuid": "f6c40950-92fe-412b-9f13-145201248fa6", 00:12:56.911 "is_configured": true, 00:12:56.911 "data_offset": 0, 00:12:56.911 "data_size": 65536 00:12:56.911 }, 00:12:56.911 { 00:12:56.911 "name": "BaseBdev2", 00:12:56.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.911 "is_configured": false, 00:12:56.911 "data_offset": 0, 00:12:56.911 "data_size": 0 00:12:56.911 } 00:12:56.911 ] 00:12:56.911 }' 00:12:56.911 14:14:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:56.911 14:14:48 -- common/autotest_common.sh@10 -- # set +x 00:12:57.478 14:14:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:57.736 [2024-11-18 14:14:49.676298] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.736 [2024-11-18 14:14:49.676558] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:12:57.736 [2024-11-18 14:14:49.676637] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:57.736 [2024-11-18 14:14:49.677066] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:12:57.736 [2024-11-18 14:14:49.677837] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:12:57.736 [2024-11-18 14:14:49.678002] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:12:57.736 [2024-11-18 14:14:49.678356] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.736 BaseBdev2 00:12:57.736 14:14:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:57.736 14:14:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:57.736 14:14:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:57.736 14:14:49 -- common/autotest_common.sh@899 -- # local i 00:12:57.736 14:14:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:57.736 14:14:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:57.736 
14:14:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:57.995 14:14:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.253 [ 00:12:58.253 { 00:12:58.253 "name": "BaseBdev2", 00:12:58.253 "aliases": [ 00:12:58.253 "1e7c31e9-b634-4266-99c1-c670420f34cc" 00:12:58.253 ], 00:12:58.253 "product_name": "Malloc disk", 00:12:58.253 "block_size": 512, 00:12:58.253 "num_blocks": 65536, 00:12:58.253 "uuid": "1e7c31e9-b634-4266-99c1-c670420f34cc", 00:12:58.253 "assigned_rate_limits": { 00:12:58.253 "rw_ios_per_sec": 0, 00:12:58.253 "rw_mbytes_per_sec": 0, 00:12:58.253 "r_mbytes_per_sec": 0, 00:12:58.253 "w_mbytes_per_sec": 0 00:12:58.253 }, 00:12:58.253 "claimed": true, 00:12:58.253 "claim_type": "exclusive_write", 00:12:58.253 "zoned": false, 00:12:58.253 "supported_io_types": { 00:12:58.253 "read": true, 00:12:58.253 "write": true, 00:12:58.253 "unmap": true, 00:12:58.253 "write_zeroes": true, 00:12:58.253 "flush": true, 00:12:58.253 "reset": true, 00:12:58.253 "compare": false, 00:12:58.253 "compare_and_write": false, 00:12:58.253 "abort": true, 00:12:58.253 "nvme_admin": false, 00:12:58.253 "nvme_io": false 00:12:58.253 }, 00:12:58.253 "memory_domains": [ 00:12:58.253 { 00:12:58.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.253 "dma_device_type": 2 00:12:58.253 } 00:12:58.253 ], 00:12:58.253 "driver_specific": {} 00:12:58.253 } 00:12:58.253 ] 00:12:58.253 14:14:50 -- common/autotest_common.sh@905 -- # return 0 00:12:58.253 14:14:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:58.253 14:14:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:58.253 14:14:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:58.253 14:14:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:58.253 14:14:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:58.253 14:14:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.254 14:14:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.512 14:14:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:58.512 "name": "Existed_Raid", 00:12:58.512 "uuid": "5b429596-65a4-4ce8-83db-8f946dbc4c24", 00:12:58.512 "strip_size_kb": 64, 00:12:58.512 "state": "online", 00:12:58.512 "raid_level": "raid0", 00:12:58.512 "superblock": false, 00:12:58.512 "num_base_bdevs": 2, 00:12:58.512 "num_base_bdevs_discovered": 2, 00:12:58.512 "num_base_bdevs_operational": 2, 00:12:58.512 "base_bdevs_list": [ 00:12:58.512 { 00:12:58.512 "name": "BaseBdev1", 00:12:58.512 "uuid": "f6c40950-92fe-412b-9f13-145201248fa6", 00:12:58.512 "is_configured": true, 00:12:58.512 "data_offset": 0, 00:12:58.512 "data_size": 65536 00:12:58.512 }, 00:12:58.512 { 00:12:58.512 "name": "BaseBdev2", 
00:12:58.512 "uuid": "1e7c31e9-b634-4266-99c1-c670420f34cc", 00:12:58.512 "is_configured": true, 00:12:58.512 "data_offset": 0, 00:12:58.512 "data_size": 65536 00:12:58.512 } 00:12:58.512 ] 00:12:58.512 }' 00:12:58.512 14:14:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:58.512 14:14:50 -- common/autotest_common.sh@10 -- # set +x 00:12:59.078 14:14:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:59.336 [2024-11-18 14:14:51.223752] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.336 [2024-11-18 14:14:51.223913] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.336 [2024-11-18 14:14:51.224102] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.336 14:14:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.595 14:14:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:59.595 "name": "Existed_Raid", 00:12:59.595 "uuid": "5b429596-65a4-4ce8-83db-8f946dbc4c24", 00:12:59.595 "strip_size_kb": 64, 00:12:59.595 "state": "offline", 00:12:59.595 "raid_level": "raid0", 00:12:59.595 "superblock": false, 00:12:59.595 "num_base_bdevs": 2, 00:12:59.595 "num_base_bdevs_discovered": 1, 00:12:59.595 "num_base_bdevs_operational": 1, 00:12:59.595 "base_bdevs_list": [ 00:12:59.595 { 00:12:59.595 "name": null, 00:12:59.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.595 "is_configured": false, 00:12:59.595 "data_offset": 0, 00:12:59.595 "data_size": 65536 00:12:59.595 }, 00:12:59.595 { 00:12:59.595 "name": "BaseBdev2", 00:12:59.595 "uuid": "1e7c31e9-b634-4266-99c1-c670420f34cc", 00:12:59.595 "is_configured": true, 00:12:59.595 "data_offset": 0, 00:12:59.595 "data_size": 65536 00:12:59.595 } 00:12:59.595 ] 00:12:59.595 }' 00:12:59.595 14:14:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:59.595 14:14:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.161 14:14:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:00.161 14:14:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:00.161 14:14:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.161 14:14:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:00.420 14:14:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:00.420 14:14:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.420 14:14:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:00.678 [2024-11-18 14:14:52.628524] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.678 [2024-11-18 14:14:52.628771] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:13:00.678 14:14:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:00.678 14:14:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:00.678 14:14:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.678 14:14:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:00.937 14:14:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:00.937 14:14:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:00.937 14:14:52 -- bdev/bdev_raid.sh@287 -- # killprocess 122446 00:13:00.937 14:14:52 -- common/autotest_common.sh@936 -- # '[' -z 122446 ']' 00:13:00.937 14:14:52 -- common/autotest_common.sh@940 -- # kill -0 122446 00:13:00.937 14:14:52 -- common/autotest_common.sh@941 -- # uname 00:13:00.937 14:14:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:00.937 14:14:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122446 00:13:00.937 killing process with pid 122446 00:13:00.937 14:14:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:00.937 14:14:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:00.937 14:14:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122446' 00:13:00.937 14:14:52 -- common/autotest_common.sh@955 -- # kill 122446 00:13:00.937 [2024-11-18 14:14:52.921405] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.937 14:14:52 -- common/autotest_common.sh@960 -- # wait 122446 00:13:00.937 [2024-11-18 14:14:52.921479] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.196 ************************************ 00:13:01.196 END TEST raid_state_function_test 00:13:01.196 ************************************ 00:13:01.196 14:14:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:01.196 00:13:01.196 real 0m8.992s 00:13:01.196 user 0m16.227s 00:13:01.196 sys 0m1.139s 00:13:01.196 14:14:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.196 14:14:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.196 14:14:53 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:01.196 14:14:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:01.196 14:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.196 14:14:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 ************************************ 00:13:01.454 START TEST raid_state_function_test_sb 00:13:01.454 ************************************ 00:13:01.454 14:14:53 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:01.454 14:14:53 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=122756 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122756' 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:01.454 Process raid pid: 122756 00:13:01.454 14:14:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122756 /var/tmp/spdk-raid.sock 00:13:01.454 14:14:53 -- common/autotest_common.sh@829 -- # '[' -z 122756 ']' 00:13:01.454 14:14:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:01.454 14:14:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.454 14:14:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:01.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:01.454 14:14:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.454 14:14:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.454 [2024-11-18 14:14:53.338851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:01.454 [2024-11-18 14:14:53.339479] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.455 [2024-11-18 14:14:53.492295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.713 [2024-11-18 14:14:53.573051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.713 [2024-11-18 14:14:53.649957] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.281 14:14:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.281 14:14:54 -- common/autotest_common.sh@862 -- # return 0 00:13:02.281 14:14:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:02.539 [2024-11-18 14:14:54.407668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.539 [2024-11-18 14:14:54.407959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.539 [2024-11-18 14:14:54.408097] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.539 [2024-11-18 14:14:54.408165] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.539 14:14:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.798 14:14:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:02.798 "name": "Existed_Raid", 00:13:02.798 "uuid": "351416ed-1dd9-4eb1-948b-e224e5c33d7c", 00:13:02.798 "strip_size_kb": 64, 00:13:02.798 "state": "configuring", 00:13:02.798 "raid_level": "raid0", 00:13:02.798 "superblock": true, 00:13:02.798 "num_base_bdevs": 2, 00:13:02.798 "num_base_bdevs_discovered": 0, 00:13:02.798 "num_base_bdevs_operational": 2, 00:13:02.798 "base_bdevs_list": [ 00:13:02.798 { 00:13:02.798 "name": "BaseBdev1", 00:13:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.798 "is_configured": false, 00:13:02.798 "data_offset": 0, 00:13:02.798 "data_size": 0 00:13:02.798 }, 00:13:02.798 { 00:13:02.798 "name": "BaseBdev2", 00:13:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.798 "is_configured": false, 00:13:02.798 "data_offset": 0, 00:13:02.798 "data_size": 0 00:13:02.798 } 00:13:02.798 ] 00:13:02.798 }' 00:13:02.798 14:14:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:02.798 14:14:54 -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.366 14:14:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:03.625 [2024-11-18 14:14:55.459692] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.625 [2024-11-18 14:14:55.459870] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:13:03.625 14:14:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:03.884 [2024-11-18 14:14:55.699772] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.884 [2024-11-18 14:14:55.699994] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.884 [2024-11-18 14:14:55.700110] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.884 [2024-11-18 14:14:55.700183] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.884 14:14:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.884 [2024-11-18 14:14:55.913953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.884 BaseBdev1 00:13:03.884 14:14:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:03.884 14:14:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:03.884 14:14:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:03.884 14:14:55 -- common/autotest_common.sh@899 -- # local i 00:13:03.884 14:14:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:03.884 14:14:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:03.884 14:14:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.144 14:14:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.403 [ 00:13:04.403 { 00:13:04.403 "name": "BaseBdev1", 00:13:04.403 "aliases": [ 00:13:04.403 "75701492-f1b3-46e4-bc5c-896fc735bb5b" 00:13:04.403 ], 00:13:04.403 "product_name": "Malloc disk", 00:13:04.403 "block_size": 512, 00:13:04.403 "num_blocks": 65536, 00:13:04.403 "uuid": "75701492-f1b3-46e4-bc5c-896fc735bb5b", 00:13:04.403 "assigned_rate_limits": { 00:13:04.403 "rw_ios_per_sec": 0, 00:13:04.403 "rw_mbytes_per_sec": 0, 00:13:04.403 "r_mbytes_per_sec": 0, 00:13:04.403 "w_mbytes_per_sec": 0 00:13:04.403 }, 00:13:04.403 "claimed": true, 00:13:04.403 "claim_type": "exclusive_write", 00:13:04.403 "zoned": false, 00:13:04.403 "supported_io_types": { 00:13:04.403 "read": true, 00:13:04.403 "write": true, 00:13:04.403 "unmap": true, 00:13:04.403 "write_zeroes": true, 00:13:04.403 "flush": true, 00:13:04.403 "reset": true, 00:13:04.403 "compare": false, 00:13:04.403 "compare_and_write": false, 00:13:04.403 "abort": true, 00:13:04.403 "nvme_admin": false, 00:13:04.403 "nvme_io": false 00:13:04.403 }, 00:13:04.403 "memory_domains": [ 00:13:04.403 { 00:13:04.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.403 "dma_device_type": 2 00:13:04.403 } 00:13:04.403 ], 00:13:04.403 "driver_specific": {} 00:13:04.403 } 00:13:04.403 ] 00:13:04.403 
14:14:56 -- common/autotest_common.sh@905 -- # return 0 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.403 14:14:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.662 14:14:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:04.662 "name": "Existed_Raid", 00:13:04.662 "uuid": "e7185998-f90e-4758-ac36-4a028d484f19", 00:13:04.662 "strip_size_kb": 64, 00:13:04.662 "state": "configuring", 00:13:04.662 "raid_level": "raid0", 00:13:04.662 "superblock": true, 00:13:04.662 "num_base_bdevs": 2, 00:13:04.662 "num_base_bdevs_discovered": 1, 00:13:04.662 "num_base_bdevs_operational": 2, 00:13:04.662 "base_bdevs_list": [ 00:13:04.662 { 00:13:04.662 "name": "BaseBdev1", 00:13:04.662 "uuid": "75701492-f1b3-46e4-bc5c-896fc735bb5b", 00:13:04.662 "is_configured": true, 00:13:04.662 "data_offset": 2048, 00:13:04.662 "data_size": 63488 00:13:04.662 }, 00:13:04.662 { 00:13:04.662 "name": "BaseBdev2", 00:13:04.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.662 "is_configured": false, 00:13:04.662 "data_offset": 0, 00:13:04.662 "data_size": 0 00:13:04.662 } 00:13:04.662 ] 00:13:04.662 }' 00:13:04.662 14:14:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:04.662 14:14:56 -- common/autotest_common.sh@10 -- # set +x 00:13:05.231 14:14:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:05.490 [2024-11-18 14:14:57.410161] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:05.490 [2024-11-18 14:14:57.410363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:05.490 14:14:57 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:05.490 14:14:57 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:05.749 14:14:57 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:05.749 BaseBdev1 00:13:05.749 14:14:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:05.749 14:14:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:05.749 14:14:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:05.749 14:14:57 -- common/autotest_common.sh@899 -- # local i 00:13:05.749 14:14:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:05.749 14:14:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:05.749 14:14:57 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:06.007 14:14:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:06.267 [ 00:13:06.267 { 00:13:06.267 "name": "BaseBdev1", 00:13:06.267 "aliases": [ 00:13:06.267 "dcf719de-b833-49cb-bd79-b28a714f0ee7" 00:13:06.267 ], 00:13:06.267 "product_name": "Malloc disk", 00:13:06.267 "block_size": 512, 00:13:06.267 "num_blocks": 65536, 00:13:06.267 "uuid": "dcf719de-b833-49cb-bd79-b28a714f0ee7", 00:13:06.267 "assigned_rate_limits": { 00:13:06.267 "rw_ios_per_sec": 0, 00:13:06.267 "rw_mbytes_per_sec": 0, 00:13:06.267 "r_mbytes_per_sec": 0, 00:13:06.267 "w_mbytes_per_sec": 0 00:13:06.267 }, 00:13:06.267 "claimed": false, 00:13:06.267 "zoned": false, 00:13:06.267 "supported_io_types": { 00:13:06.267 "read": true, 00:13:06.267 "write": true, 00:13:06.267 "unmap": true, 00:13:06.267 "write_zeroes": true, 00:13:06.267 "flush": true, 00:13:06.267 "reset": true, 00:13:06.267 "compare": false, 00:13:06.267 "compare_and_write": false, 00:13:06.267 "abort": true, 00:13:06.267 "nvme_admin": false, 00:13:06.267 "nvme_io": false 00:13:06.267 }, 00:13:06.267 "memory_domains": [ 00:13:06.267 { 00:13:06.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.267 "dma_device_type": 2 00:13:06.267 } 00:13:06.267 ], 00:13:06.267 "driver_specific": {} 00:13:06.267 } 00:13:06.267 ] 00:13:06.267 14:14:58 -- common/autotest_common.sh@905 -- # return 0 00:13:06.267 14:14:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:06.527 [2024-11-18 14:14:58.380761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.527 [2024-11-18 14:14:58.382789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.527 [2024-11-18 14:14:58.382966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:06.527 "name": "Existed_Raid", 00:13:06.527 "uuid": "eb39ae75-e982-4201-bfab-f780b735513f", 00:13:06.527 "strip_size_kb": 64, 00:13:06.527 "state": 
"configuring", 00:13:06.527 "raid_level": "raid0", 00:13:06.527 "superblock": true, 00:13:06.527 "num_base_bdevs": 2, 00:13:06.527 "num_base_bdevs_discovered": 1, 00:13:06.527 "num_base_bdevs_operational": 2, 00:13:06.527 "base_bdevs_list": [ 00:13:06.527 { 00:13:06.527 "name": "BaseBdev1", 00:13:06.527 "uuid": "dcf719de-b833-49cb-bd79-b28a714f0ee7", 00:13:06.527 "is_configured": true, 00:13:06.527 "data_offset": 2048, 00:13:06.527 "data_size": 63488 00:13:06.527 }, 00:13:06.527 { 00:13:06.527 "name": "BaseBdev2", 00:13:06.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.527 "is_configured": false, 00:13:06.527 "data_offset": 0, 00:13:06.527 "data_size": 0 00:13:06.527 } 00:13:06.527 ] 00:13:06.527 }' 00:13:06.527 14:14:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:06.527 14:14:58 -- common/autotest_common.sh@10 -- # set +x 00:13:07.465 14:14:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:07.465 [2024-11-18 14:14:59.484158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.465 [2024-11-18 14:14:59.484645] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:13:07.465 [2024-11-18 14:14:59.484825] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:07.465 [2024-11-18 14:14:59.485196] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:07.465 BaseBdev2 00:13:07.465 [2024-11-18 14:14:59.486005] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:13:07.465 [2024-11-18 14:14:59.486176] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:13:07.465 [2024-11-18 14:14:59.486547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.465 14:14:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:07.465 14:14:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:07.465 14:14:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:07.465 14:14:59 -- common/autotest_common.sh@899 -- # local i 00:13:07.465 14:14:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:07.465 14:14:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:07.465 14:14:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.724 14:14:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:07.983 [ 00:13:07.983 { 00:13:07.983 "name": "BaseBdev2", 00:13:07.983 "aliases": [ 00:13:07.983 "3647b25f-cae7-44d6-ba82-ed490011323b" 00:13:07.983 ], 00:13:07.983 "product_name": "Malloc disk", 00:13:07.983 "block_size": 512, 00:13:07.983 "num_blocks": 65536, 00:13:07.983 "uuid": "3647b25f-cae7-44d6-ba82-ed490011323b", 00:13:07.983 "assigned_rate_limits": { 00:13:07.983 "rw_ios_per_sec": 0, 00:13:07.983 "rw_mbytes_per_sec": 0, 00:13:07.983 "r_mbytes_per_sec": 0, 00:13:07.983 "w_mbytes_per_sec": 0 00:13:07.983 }, 00:13:07.983 "claimed": true, 00:13:07.983 "claim_type": "exclusive_write", 00:13:07.983 "zoned": false, 00:13:07.983 "supported_io_types": { 00:13:07.983 "read": true, 00:13:07.983 "write": true, 00:13:07.983 "unmap": true, 00:13:07.983 "write_zeroes": true, 00:13:07.983 "flush": true, 00:13:07.983 
"reset": true, 00:13:07.983 "compare": false, 00:13:07.983 "compare_and_write": false, 00:13:07.983 "abort": true, 00:13:07.983 "nvme_admin": false, 00:13:07.983 "nvme_io": false 00:13:07.983 }, 00:13:07.983 "memory_domains": [ 00:13:07.983 { 00:13:07.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.983 "dma_device_type": 2 00:13:07.983 } 00:13:07.983 ], 00:13:07.983 "driver_specific": {} 00:13:07.983 } 00:13:07.983 ] 00:13:07.983 14:14:59 -- common/autotest_common.sh@905 -- # return 0 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.983 14:14:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.241 14:15:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:08.241 "name": "Existed_Raid", 00:13:08.241 "uuid": "eb39ae75-e982-4201-bfab-f780b735513f", 00:13:08.241 "strip_size_kb": 64, 00:13:08.241 "state": "online", 00:13:08.241 "raid_level": "raid0", 00:13:08.241 "superblock": true, 00:13:08.241 "num_base_bdevs": 2, 00:13:08.241 "num_base_bdevs_discovered": 2, 00:13:08.241 "num_base_bdevs_operational": 2, 00:13:08.241 "base_bdevs_list": [ 00:13:08.241 { 00:13:08.241 "name": "BaseBdev1", 00:13:08.241 "uuid": "dcf719de-b833-49cb-bd79-b28a714f0ee7", 00:13:08.241 "is_configured": true, 00:13:08.241 "data_offset": 2048, 00:13:08.241 "data_size": 63488 00:13:08.241 }, 00:13:08.241 { 00:13:08.241 "name": "BaseBdev2", 00:13:08.241 "uuid": "3647b25f-cae7-44d6-ba82-ed490011323b", 00:13:08.241 "is_configured": true, 00:13:08.241 "data_offset": 2048, 00:13:08.242 "data_size": 63488 00:13:08.242 } 00:13:08.242 ] 00:13:08.242 }' 00:13:08.242 14:15:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:08.242 14:15:00 -- common/autotest_common.sh@10 -- # set +x 00:13:08.809 14:15:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:09.068 [2024-11-18 14:15:00.979346] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.068 [2024-11-18 14:15:00.979515] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.068 [2024-11-18 14:15:00.979704] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:09.068 
14:15:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.068 14:15:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.327 14:15:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:09.327 "name": "Existed_Raid", 00:13:09.327 "uuid": "eb39ae75-e982-4201-bfab-f780b735513f", 00:13:09.327 "strip_size_kb": 64, 00:13:09.327 "state": "offline", 00:13:09.327 "raid_level": "raid0", 00:13:09.327 "superblock": true, 00:13:09.327 "num_base_bdevs": 2, 00:13:09.327 "num_base_bdevs_discovered": 1, 00:13:09.327 "num_base_bdevs_operational": 1, 00:13:09.327 "base_bdevs_list": [ 00:13:09.327 { 00:13:09.327 "name": null, 00:13:09.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.327 "is_configured": false, 00:13:09.327 "data_offset": 2048, 00:13:09.327 "data_size": 63488 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "name": "BaseBdev2", 00:13:09.327 "uuid": "3647b25f-cae7-44d6-ba82-ed490011323b", 00:13:09.327 "is_configured": true, 00:13:09.327 "data_offset": 2048, 00:13:09.327 "data_size": 63488 00:13:09.327 } 00:13:09.327 ] 00:13:09.327 }' 00:13:09.327 14:15:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:09.327 14:15:01 -- common/autotest_common.sh@10 -- # set +x 00:13:09.894 14:15:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:09.894 14:15:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:09.894 14:15:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.894 14:15:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:10.152 14:15:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:10.152 14:15:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:10.152 14:15:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:10.152 [2024-11-18 14:15:02.155800] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.153 [2024-11-18 14:15:02.156017] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:13:10.153 14:15:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:10.153 14:15:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:10.153 14:15:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.153 14:15:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:10.411 14:15:02 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:13:10.411 14:15:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:10.411 14:15:02 -- bdev/bdev_raid.sh@287 -- # killprocess 122756 00:13:10.411 14:15:02 -- common/autotest_common.sh@936 -- # '[' -z 122756 ']' 00:13:10.411 14:15:02 -- common/autotest_common.sh@940 -- # kill -0 122756 00:13:10.411 14:15:02 -- common/autotest_common.sh@941 -- # uname 00:13:10.411 14:15:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:10.411 14:15:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122756 00:13:10.411 14:15:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:10.411 14:15:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:10.411 14:15:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122756' 00:13:10.411 killing process with pid 122756 00:13:10.411 14:15:02 -- common/autotest_common.sh@955 -- # kill 122756 00:13:10.411 14:15:02 -- common/autotest_common.sh@960 -- # wait 122756 00:13:10.411 [2024-11-18 14:15:02.449323] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.411 [2024-11-18 14:15:02.449437] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.978 ************************************ 00:13:10.978 END TEST raid_state_function_test_sb 00:13:10.978 ************************************ 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:10.978 00:13:10.978 real 0m9.499s 00:13:10.978 user 0m16.986s 00:13:10.978 sys 0m1.357s 00:13:10.978 14:15:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:10.978 14:15:02 -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:13:10.978 14:15:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:10.978 14:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:10.978 14:15:02 -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 ************************************ 00:13:10.978 START TEST raid_superblock_test 00:13:10.978 ************************************ 00:13:10.978 14:15:02 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@357 -- # raid_pid=123073 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:10.978 14:15:02 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123073 /var/tmp/spdk-raid.sock 00:13:10.978 14:15:02 -- common/autotest_common.sh@829 -- # '[' -z 123073 ']' 00:13:10.978 14:15:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:10.978 14:15:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:10.978 14:15:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:10.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:10.978 14:15:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:10.978 14:15:02 -- common/autotest_common.sh@10 -- # set +x 00:13:10.978 [2024-11-18 14:15:02.897301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:10.978 [2024-11-18 14:15:02.897840] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123073 ] 00:13:10.978 [2024-11-18 14:15:03.049989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.237 [2024-11-18 14:15:03.140977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.237 [2024-11-18 14:15:03.221161] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.804 14:15:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.804 14:15:03 -- common/autotest_common.sh@862 -- # return 0 00:13:11.804 14:15:03 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:11.804 14:15:03 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.805 14:15:03 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:12.064 malloc1 00:13:12.064 14:15:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.324 [2024-11-18 14:15:04.341731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.324 [2024-11-18 14:15:04.341993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.324 [2024-11-18 14:15:04.342190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:13:12.324 [2024-11-18 14:15:04.342360] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.324 [2024-11-18 14:15:04.344945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.324 [2024-11-18 14:15:04.345146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.324 pt1 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.324 14:15:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:12.583 malloc2 00:13:12.583 14:15:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.841 [2024-11-18 14:15:04.799119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.841 [2024-11-18 14:15:04.799376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.841 [2024-11-18 14:15:04.799464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:12.841 [2024-11-18 14:15:04.799776] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.841 [2024-11-18 14:15:04.802263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.841 [2024-11-18 14:15:04.802464] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.841 pt2 00:13:12.841 14:15:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:12.841 14:15:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:12.841 14:15:04 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:13.099 [2024-11-18 14:15:05.027340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:13.099 [2024-11-18 14:15:05.029574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.099 [2024-11-18 14:15:05.029929] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:13:13.099 [2024-11-18 14:15:05.030059] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:13.099 [2024-11-18 14:15:05.030251] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:13:13.099 [2024-11-18 14:15:05.030752] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:13:13.099 [2024-11-18 14:15:05.030895] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:13:13.099 [2024-11-18 14:15:05.031243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.099 14:15:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.356 14:15:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:13.356 "name": "raid_bdev1", 00:13:13.356 "uuid": "21f31b84-8e47-4319-8bed-307577e73ce8", 00:13:13.356 "strip_size_kb": 64, 00:13:13.356 "state": "online", 00:13:13.356 "raid_level": "raid0", 00:13:13.356 "superblock": true, 00:13:13.356 "num_base_bdevs": 2, 00:13:13.356 "num_base_bdevs_discovered": 2, 00:13:13.356 "num_base_bdevs_operational": 2, 00:13:13.356 "base_bdevs_list": [ 00:13:13.356 { 00:13:13.356 "name": "pt1", 00:13:13.356 "uuid": "5782ae75-20a6-5e64-9752-e705580afef1", 00:13:13.356 "is_configured": true, 00:13:13.356 "data_offset": 2048, 00:13:13.356 "data_size": 63488 00:13:13.356 }, 00:13:13.356 { 00:13:13.356 "name": "pt2", 00:13:13.356 "uuid": "f90e03be-fd33-57bc-b38c-10ec422b4aa9", 00:13:13.356 "is_configured": true, 00:13:13.356 "data_offset": 2048, 00:13:13.356 "data_size": 63488 00:13:13.356 } 00:13:13.356 ] 00:13:13.356 }' 00:13:13.356 14:15:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:13.356 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:13:13.921 14:15:05 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:13.921 14:15:05 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:14.178 [2024-11-18 14:15:06.039613] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.178 14:15:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=21f31b84-8e47-4319-8bed-307577e73ce8 00:13:14.178 14:15:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 21f31b84-8e47-4319-8bed-307577e73ce8 ']' 00:13:14.178 14:15:06 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:14.437 [2024-11-18 14:15:06.295487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.437 [2024-11-18 14:15:06.295642] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.437 [2024-11-18 14:15:06.295845] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.437 [2024-11-18 14:15:06.296011] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.437 [2024-11-18 14:15:06.296123] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:13:14.437 14:15:06 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.437 14:15:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:14.437 14:15:06 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:14.437 14:15:06 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:14.437 14:15:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:14.437 14:15:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:13:14.697 14:15:06 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:14.697 14:15:06 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:14.955 14:15:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:14.955 14:15:07 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:15.215 14:15:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:15.215 14:15:07 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:15.215 14:15:07 -- common/autotest_common.sh@650 -- # local es=0 00:13:15.215 14:15:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:15.215 14:15:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.215 14:15:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.215 14:15:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.215 14:15:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.215 14:15:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.215 14:15:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.215 14:15:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:15.215 14:15:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:15.215 14:15:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:13:15.474 [2024-11-18 14:15:07.499688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:15.474 [2024-11-18 14:15:07.501820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:15.474 [2024-11-18 14:15:07.502013] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:15.474 [2024-11-18 14:15:07.502214] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:15.474 [2024-11-18 14:15:07.502301] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.474 [2024-11-18 14:15:07.502404] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:13:15.474 request: 00:13:15.474 { 00:13:15.474 "name": "raid_bdev1", 00:13:15.474 "raid_level": "raid0", 00:13:15.474 "base_bdevs": [ 00:13:15.474 "malloc1", 00:13:15.474 "malloc2" 00:13:15.474 ], 00:13:15.474 "superblock": false, 00:13:15.474 "strip_size_kb": 64, 00:13:15.474 "method": "bdev_raid_create", 00:13:15.474 "req_id": 1 00:13:15.474 } 00:13:15.474 Got JSON-RPC error response 00:13:15.475 response: 00:13:15.475 { 00:13:15.475 "code": -17, 00:13:15.475 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:15.475 } 00:13:15.475 14:15:07 -- common/autotest_common.sh@653 -- # es=1 00:13:15.475 14:15:07 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.475 14:15:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.475 14:15:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.475 14:15:07 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.475 14:15:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:15.733 14:15:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:15.733 14:15:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:15.733 14:15:07 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:15.992 [2024-11-18 14:15:07.895708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:15.992 [2024-11-18 14:15:07.895919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.992 [2024-11-18 14:15:07.896021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:15.992 [2024-11-18 14:15:07.896249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.992 [2024-11-18 14:15:07.898691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.992 [2024-11-18 14:15:07.898870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:15.992 [2024-11-18 14:15:07.899043] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:15.992 [2024-11-18 14:15:07.899247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:15.992 pt1 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.992 14:15:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.251 14:15:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:16.251 "name": "raid_bdev1", 00:13:16.251 "uuid": "21f31b84-8e47-4319-8bed-307577e73ce8", 00:13:16.251 "strip_size_kb": 64, 00:13:16.251 "state": "configuring", 00:13:16.251 "raid_level": "raid0", 00:13:16.251 "superblock": true, 00:13:16.251 "num_base_bdevs": 2, 00:13:16.251 "num_base_bdevs_discovered": 1, 00:13:16.251 "num_base_bdevs_operational": 2, 00:13:16.251 "base_bdevs_list": [ 00:13:16.251 { 00:13:16.251 "name": "pt1", 00:13:16.251 "uuid": "5782ae75-20a6-5e64-9752-e705580afef1", 00:13:16.251 "is_configured": true, 00:13:16.251 "data_offset": 2048, 00:13:16.251 "data_size": 63488 00:13:16.251 }, 00:13:16.251 { 00:13:16.251 "name": null, 00:13:16.251 "uuid": 
"f90e03be-fd33-57bc-b38c-10ec422b4aa9", 00:13:16.251 "is_configured": false, 00:13:16.251 "data_offset": 2048, 00:13:16.251 "data_size": 63488 00:13:16.251 } 00:13:16.251 ] 00:13:16.251 }' 00:13:16.251 14:15:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:16.251 14:15:08 -- common/autotest_common.sh@10 -- # set +x 00:13:16.819 14:15:08 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:13:16.819 14:15:08 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:16.819 14:15:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:16.819 14:15:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:17.078 [2024-11-18 14:15:08.951905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:17.078 [2024-11-18 14:15:08.952128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.078 [2024-11-18 14:15:08.952206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:13:17.078 [2024-11-18 14:15:08.952373] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.078 [2024-11-18 14:15:08.952838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.078 [2024-11-18 14:15:08.952916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:17.078 [2024-11-18 14:15:08.953116] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:17.078 [2024-11-18 14:15:08.953207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:17.078 [2024-11-18 14:15:08.953344] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:13:17.078 [2024-11-18 14:15:08.953418] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:17.078 [2024-11-18 14:15:08.953528] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:13:17.078 [2024-11-18 14:15:08.953863] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:13:17.078 [2024-11-18 14:15:08.954090] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:13:17.078 [2024-11-18 14:15:08.954228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.078 pt2 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.078 14:15:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.344 14:15:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:17.344 "name": "raid_bdev1", 00:13:17.344 "uuid": "21f31b84-8e47-4319-8bed-307577e73ce8", 00:13:17.344 "strip_size_kb": 64, 00:13:17.344 "state": "online", 00:13:17.344 "raid_level": "raid0", 00:13:17.344 "superblock": true, 00:13:17.344 "num_base_bdevs": 2, 00:13:17.344 "num_base_bdevs_discovered": 2, 00:13:17.344 "num_base_bdevs_operational": 2, 00:13:17.344 "base_bdevs_list": [ 00:13:17.344 { 00:13:17.344 "name": "pt1", 00:13:17.344 "uuid": "5782ae75-20a6-5e64-9752-e705580afef1", 00:13:17.344 "is_configured": true, 00:13:17.344 "data_offset": 2048, 00:13:17.344 "data_size": 63488 00:13:17.344 }, 00:13:17.344 { 00:13:17.344 "name": "pt2", 00:13:17.344 "uuid": "f90e03be-fd33-57bc-b38c-10ec422b4aa9", 00:13:17.344 "is_configured": true, 00:13:17.344 "data_offset": 2048, 00:13:17.344 "data_size": 63488 00:13:17.344 } 00:13:17.344 ] 00:13:17.344 }' 00:13:17.344 14:15:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:17.344 14:15:09 -- common/autotest_common.sh@10 -- # set +x 00:13:17.910 14:15:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:17.910 14:15:09 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:18.168 [2024-11-18 14:15:10.116262] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.168 14:15:10 -- bdev/bdev_raid.sh@430 -- # '[' 21f31b84-8e47-4319-8bed-307577e73ce8 '!=' 21f31b84-8e47-4319-8bed-307577e73ce8 ']' 00:13:18.168 14:15:10 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:13:18.168 14:15:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:18.168 14:15:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:18.168 14:15:10 -- bdev/bdev_raid.sh@511 -- # killprocess 123073 00:13:18.168 14:15:10 -- common/autotest_common.sh@936 -- # '[' -z 123073 ']' 00:13:18.168 14:15:10 -- common/autotest_common.sh@940 -- # kill -0 123073 00:13:18.168 14:15:10 -- common/autotest_common.sh@941 -- # uname 00:13:18.168 14:15:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.168 14:15:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123073 00:13:18.168 14:15:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.168 14:15:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.168 14:15:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123073' 00:13:18.168 killing process with pid 123073 00:13:18.168 14:15:10 -- common/autotest_common.sh@955 -- # kill 123073 00:13:18.168 [2024-11-18 14:15:10.157253] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.168 14:15:10 -- common/autotest_common.sh@960 -- # wait 123073 00:13:18.168 [2024-11-18 14:15:10.157524] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.168 [2024-11-18 14:15:10.157691] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.168 [2024-11-18 14:15:10.157860] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:13:18.168 [2024-11-18 14:15:10.183880] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.426 14:15:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:18.426 00:13:18.426 real 0m7.646s 
00:13:18.426 user 0m13.611s 00:13:18.426 sys 0m1.079s 00:13:18.426 14:15:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:18.426 ************************************ 00:13:18.426 END TEST raid_superblock_test 00:13:18.426 ************************************ 00:13:18.426 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:13:18.685 14:15:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:18.685 14:15:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.685 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:13:18.685 ************************************ 00:13:18.685 START TEST raid_state_function_test 00:13:18.685 ************************************ 00:13:18.685 14:15:10 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:18.685 14:15:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=123318 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123318' 00:13:18.686 Process raid pid: 123318 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123318 /var/tmp/spdk-raid.sock 00:13:18.686 14:15:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:18.686 14:15:10 -- common/autotest_common.sh@829 -- # '[' -z 123318 ']' 00:13:18.686 14:15:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:18.686 14:15:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:18.686 14:15:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:18.686 14:15:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.686 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:13:18.686 [2024-11-18 14:15:10.589562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:18.686 [2024-11-18 14:15:10.590016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.686 [2024-11-18 14:15:10.728967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.944 [2024-11-18 14:15:10.805994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.944 [2024-11-18 14:15:10.876610] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.512 14:15:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.512 14:15:11 -- common/autotest_common.sh@862 -- # return 0 00:13:19.512 14:15:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:19.771 [2024-11-18 14:15:11.774859] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.771 [2024-11-18 14:15:11.774956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.771 [2024-11-18 14:15:11.774972] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.771 [2024-11-18 14:15:11.774994] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.771 14:15:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.030 14:15:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.030 "name": "Existed_Raid", 00:13:20.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.030 "strip_size_kb": 64, 00:13:20.030 "state": "configuring", 00:13:20.030 "raid_level": "concat", 00:13:20.030 "superblock": false, 00:13:20.030 "num_base_bdevs": 2, 00:13:20.031 "num_base_bdevs_discovered": 0, 00:13:20.031 "num_base_bdevs_operational": 2, 00:13:20.031 "base_bdevs_list": [ 00:13:20.031 { 00:13:20.031 "name": "BaseBdev1", 00:13:20.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.031 "is_configured": false, 
00:13:20.031 "data_offset": 0, 00:13:20.031 "data_size": 0 00:13:20.031 }, 00:13:20.031 { 00:13:20.031 "name": "BaseBdev2", 00:13:20.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.031 "is_configured": false, 00:13:20.031 "data_offset": 0, 00:13:20.031 "data_size": 0 00:13:20.031 } 00:13:20.031 ] 00:13:20.031 }' 00:13:20.031 14:15:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.031 14:15:12 -- common/autotest_common.sh@10 -- # set +x 00:13:20.598 14:15:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:20.857 [2024-11-18 14:15:12.886879] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.857 [2024-11-18 14:15:12.886917] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:13:20.857 14:15:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:21.116 [2024-11-18 14:15:13.150946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.116 [2024-11-18 14:15:13.151019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.116 [2024-11-18 14:15:13.151032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.116 [2024-11-18 14:15:13.151060] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.116 14:15:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.376 [2024-11-18 14:15:13.353174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.376 BaseBdev1 00:13:21.376 14:15:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:21.376 14:15:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:21.376 14:15:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:21.376 14:15:13 -- common/autotest_common.sh@899 -- # local i 00:13:21.376 14:15:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:21.376 14:15:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:21.376 14:15:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.635 14:15:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.893 [ 00:13:21.893 { 00:13:21.893 "name": "BaseBdev1", 00:13:21.893 "aliases": [ 00:13:21.893 "b0512aad-7f76-4397-9b59-55a4ed275ad1" 00:13:21.893 ], 00:13:21.893 "product_name": "Malloc disk", 00:13:21.893 "block_size": 512, 00:13:21.893 "num_blocks": 65536, 00:13:21.893 "uuid": "b0512aad-7f76-4397-9b59-55a4ed275ad1", 00:13:21.893 "assigned_rate_limits": { 00:13:21.893 "rw_ios_per_sec": 0, 00:13:21.893 "rw_mbytes_per_sec": 0, 00:13:21.893 "r_mbytes_per_sec": 0, 00:13:21.893 "w_mbytes_per_sec": 0 00:13:21.893 }, 00:13:21.893 "claimed": true, 00:13:21.893 "claim_type": "exclusive_write", 00:13:21.893 "zoned": false, 00:13:21.893 "supported_io_types": { 00:13:21.893 "read": true, 00:13:21.893 "write": true, 00:13:21.893 "unmap": true, 00:13:21.893 "write_zeroes": true, 00:13:21.893 "flush": true, 00:13:21.893 "reset": true, 00:13:21.893 
"compare": false, 00:13:21.893 "compare_and_write": false, 00:13:21.893 "abort": true, 00:13:21.893 "nvme_admin": false, 00:13:21.893 "nvme_io": false 00:13:21.893 }, 00:13:21.893 "memory_domains": [ 00:13:21.893 { 00:13:21.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.893 "dma_device_type": 2 00:13:21.893 } 00:13:21.893 ], 00:13:21.893 "driver_specific": {} 00:13:21.893 } 00:13:21.893 ] 00:13:21.893 14:15:13 -- common/autotest_common.sh@905 -- # return 0 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.893 14:15:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.152 14:15:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.152 "name": "Existed_Raid", 00:13:22.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.152 "strip_size_kb": 64, 00:13:22.152 "state": "configuring", 00:13:22.152 "raid_level": "concat", 00:13:22.152 "superblock": false, 00:13:22.152 "num_base_bdevs": 2, 00:13:22.152 "num_base_bdevs_discovered": 1, 00:13:22.152 "num_base_bdevs_operational": 2, 00:13:22.152 "base_bdevs_list": [ 00:13:22.152 { 00:13:22.152 "name": "BaseBdev1", 00:13:22.152 "uuid": "b0512aad-7f76-4397-9b59-55a4ed275ad1", 00:13:22.152 "is_configured": true, 00:13:22.152 "data_offset": 0, 00:13:22.152 "data_size": 65536 00:13:22.152 }, 00:13:22.152 { 00:13:22.152 "name": "BaseBdev2", 00:13:22.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.152 "is_configured": false, 00:13:22.152 "data_offset": 0, 00:13:22.152 "data_size": 0 00:13:22.152 } 00:13:22.152 ] 00:13:22.152 }' 00:13:22.152 14:15:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.152 14:15:13 -- common/autotest_common.sh@10 -- # set +x 00:13:22.750 14:15:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:23.024 [2024-11-18 14:15:14.873457] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.024 [2024-11-18 14:15:14.873526] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:23.024 14:15:14 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:23.024 14:15:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:23.303 [2024-11-18 14:15:15.109599] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.303 [2024-11-18 14:15:15.111720] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:13:23.303 [2024-11-18 14:15:15.111793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.303 "name": "Existed_Raid", 00:13:23.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.303 "strip_size_kb": 64, 00:13:23.303 "state": "configuring", 00:13:23.303 "raid_level": "concat", 00:13:23.303 "superblock": false, 00:13:23.303 "num_base_bdevs": 2, 00:13:23.303 "num_base_bdevs_discovered": 1, 00:13:23.303 "num_base_bdevs_operational": 2, 00:13:23.303 "base_bdevs_list": [ 00:13:23.303 { 00:13:23.303 "name": "BaseBdev1", 00:13:23.303 "uuid": "b0512aad-7f76-4397-9b59-55a4ed275ad1", 00:13:23.303 "is_configured": true, 00:13:23.303 "data_offset": 0, 00:13:23.303 "data_size": 65536 00:13:23.303 }, 00:13:23.303 { 00:13:23.303 "name": "BaseBdev2", 00:13:23.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.303 "is_configured": false, 00:13:23.303 "data_offset": 0, 00:13:23.303 "data_size": 0 00:13:23.303 } 00:13:23.303 ] 00:13:23.303 }' 00:13:23.303 14:15:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.303 14:15:15 -- common/autotest_common.sh@10 -- # set +x 00:13:23.881 14:15:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.140 [2024-11-18 14:15:16.192410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.140 [2024-11-18 14:15:16.192497] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:13:24.140 [2024-11-18 14:15:16.192517] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:24.140 [2024-11-18 14:15:16.192735] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:13:24.140 [2024-11-18 14:15:16.193368] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:13:24.140 [2024-11-18 14:15:16.193401] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:13:24.140 [2024-11-18 14:15:16.193815] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.140 BaseBdev2 00:13:24.140 14:15:16 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:13:24.140 14:15:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:24.140 14:15:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:24.140 14:15:16 -- common/autotest_common.sh@899 -- # local i 00:13:24.140 14:15:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:24.140 14:15:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:24.140 14:15:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.399 14:15:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.658 [ 00:13:24.658 { 00:13:24.658 "name": "BaseBdev2", 00:13:24.658 "aliases": [ 00:13:24.658 "c99f9ad9-94b5-4b38-86b3-bb1371653297" 00:13:24.658 ], 00:13:24.658 "product_name": "Malloc disk", 00:13:24.658 "block_size": 512, 00:13:24.658 "num_blocks": 65536, 00:13:24.658 "uuid": "c99f9ad9-94b5-4b38-86b3-bb1371653297", 00:13:24.658 "assigned_rate_limits": { 00:13:24.658 "rw_ios_per_sec": 0, 00:13:24.658 "rw_mbytes_per_sec": 0, 00:13:24.658 "r_mbytes_per_sec": 0, 00:13:24.658 "w_mbytes_per_sec": 0 00:13:24.658 }, 00:13:24.658 "claimed": true, 00:13:24.658 "claim_type": "exclusive_write", 00:13:24.658 "zoned": false, 00:13:24.658 "supported_io_types": { 00:13:24.658 "read": true, 00:13:24.658 "write": true, 00:13:24.658 "unmap": true, 00:13:24.658 "write_zeroes": true, 00:13:24.658 "flush": true, 00:13:24.658 "reset": true, 00:13:24.658 "compare": false, 00:13:24.658 "compare_and_write": false, 00:13:24.658 "abort": true, 00:13:24.658 "nvme_admin": false, 00:13:24.658 "nvme_io": false 00:13:24.658 }, 00:13:24.658 "memory_domains": [ 00:13:24.658 { 00:13:24.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.658 "dma_device_type": 2 00:13:24.658 } 00:13:24.658 ], 00:13:24.658 "driver_specific": {} 00:13:24.658 } 00:13:24.658 ] 00:13:24.658 14:15:16 -- common/autotest_common.sh@905 -- # return 0 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.658 14:15:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.917 14:15:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:24.917 "name": "Existed_Raid", 00:13:24.917 "uuid": "f612a0ee-736f-442b-a3cc-90a03b39b150", 00:13:24.917 "strip_size_kb": 64, 00:13:24.917 "state": "online", 00:13:24.917 "raid_level": "concat", 00:13:24.917 "superblock": false, 
00:13:24.917 "num_base_bdevs": 2, 00:13:24.917 "num_base_bdevs_discovered": 2, 00:13:24.917 "num_base_bdevs_operational": 2, 00:13:24.917 "base_bdevs_list": [ 00:13:24.917 { 00:13:24.917 "name": "BaseBdev1", 00:13:24.917 "uuid": "b0512aad-7f76-4397-9b59-55a4ed275ad1", 00:13:24.917 "is_configured": true, 00:13:24.917 "data_offset": 0, 00:13:24.917 "data_size": 65536 00:13:24.917 }, 00:13:24.917 { 00:13:24.917 "name": "BaseBdev2", 00:13:24.917 "uuid": "c99f9ad9-94b5-4b38-86b3-bb1371653297", 00:13:24.917 "is_configured": true, 00:13:24.917 "data_offset": 0, 00:13:24.917 "data_size": 65536 00:13:24.917 } 00:13:24.917 ] 00:13:24.917 }' 00:13:24.917 14:15:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:24.917 14:15:16 -- common/autotest_common.sh@10 -- # set +x 00:13:25.484 14:15:17 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:25.743 [2024-11-18 14:15:17.664848] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:25.743 [2024-11-18 14:15:17.664881] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.743 [2024-11-18 14:15:17.664971] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.743 14:15:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.002 14:15:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.002 "name": "Existed_Raid", 00:13:26.002 "uuid": "f612a0ee-736f-442b-a3cc-90a03b39b150", 00:13:26.002 "strip_size_kb": 64, 00:13:26.002 "state": "offline", 00:13:26.002 "raid_level": "concat", 00:13:26.002 "superblock": false, 00:13:26.002 "num_base_bdevs": 2, 00:13:26.002 "num_base_bdevs_discovered": 1, 00:13:26.002 "num_base_bdevs_operational": 1, 00:13:26.002 "base_bdevs_list": [ 00:13:26.002 { 00:13:26.002 "name": null, 00:13:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.002 "is_configured": false, 00:13:26.002 "data_offset": 0, 00:13:26.002 "data_size": 65536 00:13:26.002 }, 00:13:26.002 { 00:13:26.002 "name": "BaseBdev2", 00:13:26.002 "uuid": "c99f9ad9-94b5-4b38-86b3-bb1371653297", 00:13:26.002 "is_configured": true, 00:13:26.002 "data_offset": 0, 00:13:26.002 
"data_size": 65536 00:13:26.002 } 00:13:26.002 ] 00:13:26.002 }' 00:13:26.002 14:15:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.002 14:15:17 -- common/autotest_common.sh@10 -- # set +x 00:13:26.569 14:15:18 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:26.569 14:15:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:26.569 14:15:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.569 14:15:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:26.828 14:15:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:26.828 14:15:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.828 14:15:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:27.086 [2024-11-18 14:15:18.950930] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:27.086 [2024-11-18 14:15:18.951031] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:13:27.086 14:15:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:27.086 14:15:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:27.086 14:15:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.086 14:15:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:27.344 14:15:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:27.344 14:15:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:27.344 14:15:19 -- bdev/bdev_raid.sh@287 -- # killprocess 123318 00:13:27.344 14:15:19 -- common/autotest_common.sh@936 -- # '[' -z 123318 ']' 00:13:27.344 14:15:19 -- common/autotest_common.sh@940 -- # kill -0 123318 00:13:27.344 14:15:19 -- common/autotest_common.sh@941 -- # uname 00:13:27.344 14:15:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.344 14:15:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123318 00:13:27.344 14:15:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:27.344 14:15:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:27.344 14:15:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123318' 00:13:27.344 killing process with pid 123318 00:13:27.344 14:15:19 -- common/autotest_common.sh@955 -- # kill 123318 00:13:27.344 [2024-11-18 14:15:19.203885] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.344 [2024-11-18 14:15:19.203971] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:27.344 14:15:19 -- common/autotest_common.sh@960 -- # wait 123318 00:13:27.602 ************************************ 00:13:27.602 END TEST raid_state_function_test 00:13:27.602 ************************************ 00:13:27.602 14:15:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:27.602 00:13:27.602 real 0m8.898s 00:13:27.602 user 0m16.190s 00:13:27.602 sys 0m1.139s 00:13:27.602 14:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:27.602 14:15:19 -- common/autotest_common.sh@10 -- # set +x 00:13:27.602 14:15:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:13:27.603 14:15:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:27.603 14:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:27.603 14:15:19 -- common/autotest_common.sh@10 -- # 
set +x 00:13:27.603 ************************************ 00:13:27.603 START TEST raid_state_function_test_sb 00:13:27.603 ************************************ 00:13:27.603 14:15:19 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=123625 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123625' 00:13:27.603 Process raid pid: 123625 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:27.603 14:15:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123625 /var/tmp/spdk-raid.sock 00:13:27.603 14:15:19 -- common/autotest_common.sh@829 -- # '[' -z 123625 ']' 00:13:27.603 14:15:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:27.603 14:15:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:27.603 14:15:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:27.603 14:15:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.603 14:15:19 -- common/autotest_common.sh@10 -- # set +x 00:13:27.603 [2024-11-18 14:15:19.535737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:13:27.603 [2024-11-18 14:15:19.535906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.861 [2024-11-18 14:15:19.680110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.861 [2024-11-18 14:15:19.770398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.861 [2024-11-18 14:15:19.846758] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.429 14:15:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.429 14:15:20 -- common/autotest_common.sh@862 -- # return 0 00:13:28.429 14:15:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:28.688 [2024-11-18 14:15:20.600368] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:28.688 [2024-11-18 14:15:20.600581] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:28.688 [2024-11-18 14:15:20.600705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:28.688 [2024-11-18 14:15:20.600772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.688 14:15:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.947 14:15:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.947 "name": "Existed_Raid", 00:13:28.947 "uuid": "e0f64f4f-b31a-4732-8238-5b1b781c560b", 00:13:28.947 "strip_size_kb": 64, 00:13:28.947 "state": "configuring", 00:13:28.947 "raid_level": "concat", 00:13:28.947 "superblock": true, 00:13:28.947 "num_base_bdevs": 2, 00:13:28.947 "num_base_bdevs_discovered": 0, 00:13:28.947 "num_base_bdevs_operational": 2, 00:13:28.947 "base_bdevs_list": [ 00:13:28.947 { 00:13:28.947 "name": "BaseBdev1", 00:13:28.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.947 "is_configured": false, 00:13:28.947 "data_offset": 0, 00:13:28.947 "data_size": 0 00:13:28.947 }, 00:13:28.947 { 00:13:28.947 "name": "BaseBdev2", 00:13:28.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.947 "is_configured": false, 00:13:28.947 "data_offset": 0, 00:13:28.947 "data_size": 0 00:13:28.947 } 00:13:28.947 ] 00:13:28.947 }' 00:13:28.947 14:15:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.947 14:15:20 -- 
common/autotest_common.sh@10 -- # set +x 00:13:29.514 14:15:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:29.773 [2024-11-18 14:15:21.628379] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:29.773 [2024-11-18 14:15:21.628534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:13:29.773 14:15:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:29.773 [2024-11-18 14:15:21.812463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.773 [2024-11-18 14:15:21.812663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.773 [2024-11-18 14:15:21.812780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:29.773 [2024-11-18 14:15:21.812853] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:29.773 14:15:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.032 [2024-11-18 14:15:22.078352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.032 BaseBdev1 00:13:30.032 14:15:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:30.032 14:15:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:30.032 14:15:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:30.032 14:15:22 -- common/autotest_common.sh@899 -- # local i 00:13:30.032 14:15:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:30.032 14:15:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:30.032 14:15:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:30.291 14:15:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.550 [ 00:13:30.550 { 00:13:30.550 "name": "BaseBdev1", 00:13:30.550 "aliases": [ 00:13:30.550 "6d207ec8-9b17-41b9-a08a-2e7cb27e14ac" 00:13:30.550 ], 00:13:30.550 "product_name": "Malloc disk", 00:13:30.550 "block_size": 512, 00:13:30.550 "num_blocks": 65536, 00:13:30.550 "uuid": "6d207ec8-9b17-41b9-a08a-2e7cb27e14ac", 00:13:30.550 "assigned_rate_limits": { 00:13:30.550 "rw_ios_per_sec": 0, 00:13:30.550 "rw_mbytes_per_sec": 0, 00:13:30.550 "r_mbytes_per_sec": 0, 00:13:30.550 "w_mbytes_per_sec": 0 00:13:30.550 }, 00:13:30.550 "claimed": true, 00:13:30.550 "claim_type": "exclusive_write", 00:13:30.550 "zoned": false, 00:13:30.550 "supported_io_types": { 00:13:30.550 "read": true, 00:13:30.550 "write": true, 00:13:30.550 "unmap": true, 00:13:30.550 "write_zeroes": true, 00:13:30.550 "flush": true, 00:13:30.550 "reset": true, 00:13:30.550 "compare": false, 00:13:30.550 "compare_and_write": false, 00:13:30.550 "abort": true, 00:13:30.550 "nvme_admin": false, 00:13:30.550 "nvme_io": false 00:13:30.550 }, 00:13:30.550 "memory_domains": [ 00:13:30.550 { 00:13:30.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.550 "dma_device_type": 2 00:13:30.550 } 00:13:30.550 ], 00:13:30.550 "driver_specific": {} 00:13:30.550 } 00:13:30.550 ] 00:13:30.550 
14:15:22 -- common/autotest_common.sh@905 -- # return 0 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.550 14:15:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.809 14:15:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.809 "name": "Existed_Raid", 00:13:30.809 "uuid": "26152a81-fd1a-4260-8fbb-9af5b3d0e9d1", 00:13:30.809 "strip_size_kb": 64, 00:13:30.809 "state": "configuring", 00:13:30.809 "raid_level": "concat", 00:13:30.809 "superblock": true, 00:13:30.809 "num_base_bdevs": 2, 00:13:30.809 "num_base_bdevs_discovered": 1, 00:13:30.809 "num_base_bdevs_operational": 2, 00:13:30.809 "base_bdevs_list": [ 00:13:30.809 { 00:13:30.809 "name": "BaseBdev1", 00:13:30.809 "uuid": "6d207ec8-9b17-41b9-a08a-2e7cb27e14ac", 00:13:30.809 "is_configured": true, 00:13:30.809 "data_offset": 2048, 00:13:30.809 "data_size": 63488 00:13:30.809 }, 00:13:30.809 { 00:13:30.809 "name": "BaseBdev2", 00:13:30.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.809 "is_configured": false, 00:13:30.809 "data_offset": 0, 00:13:30.809 "data_size": 0 00:13:30.809 } 00:13:30.809 ] 00:13:30.809 }' 00:13:30.809 14:15:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.809 14:15:22 -- common/autotest_common.sh@10 -- # set +x 00:13:31.377 14:15:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:31.377 [2024-11-18 14:15:23.382688] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:31.377 [2024-11-18 14:15:23.382860] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:31.377 14:15:23 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:31.377 14:15:23 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:31.635 14:15:23 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.894 BaseBdev1 00:13:31.894 14:15:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:31.894 14:15:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:31.894 14:15:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:31.894 14:15:23 -- common/autotest_common.sh@899 -- # local i 00:13:31.894 14:15:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:31.894 14:15:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:31.894 14:15:23 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:32.153 14:15:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:32.153 [ 00:13:32.153 { 00:13:32.153 "name": "BaseBdev1", 00:13:32.153 "aliases": [ 00:13:32.153 "337cb6a6-a1a0-4ff7-bf0f-c31f6f001943" 00:13:32.153 ], 00:13:32.153 "product_name": "Malloc disk", 00:13:32.153 "block_size": 512, 00:13:32.153 "num_blocks": 65536, 00:13:32.153 "uuid": "337cb6a6-a1a0-4ff7-bf0f-c31f6f001943", 00:13:32.153 "assigned_rate_limits": { 00:13:32.153 "rw_ios_per_sec": 0, 00:13:32.153 "rw_mbytes_per_sec": 0, 00:13:32.153 "r_mbytes_per_sec": 0, 00:13:32.153 "w_mbytes_per_sec": 0 00:13:32.153 }, 00:13:32.153 "claimed": false, 00:13:32.153 "zoned": false, 00:13:32.153 "supported_io_types": { 00:13:32.153 "read": true, 00:13:32.153 "write": true, 00:13:32.153 "unmap": true, 00:13:32.153 "write_zeroes": true, 00:13:32.153 "flush": true, 00:13:32.153 "reset": true, 00:13:32.153 "compare": false, 00:13:32.153 "compare_and_write": false, 00:13:32.153 "abort": true, 00:13:32.153 "nvme_admin": false, 00:13:32.153 "nvme_io": false 00:13:32.153 }, 00:13:32.153 "memory_domains": [ 00:13:32.153 { 00:13:32.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.153 "dma_device_type": 2 00:13:32.153 } 00:13:32.153 ], 00:13:32.153 "driver_specific": {} 00:13:32.153 } 00:13:32.153 ] 00:13:32.412 14:15:24 -- common/autotest_common.sh@905 -- # return 0 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:32.412 [2024-11-18 14:15:24.395404] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.412 [2024-11-18 14:15:24.397528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:32.412 [2024-11-18 14:15:24.397722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.412 14:15:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.670 14:15:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:32.670 "name": "Existed_Raid", 00:13:32.670 "uuid": "e1cdebb3-eee4-4b69-ab2b-a78315603a72", 00:13:32.670 "strip_size_kb": 64, 00:13:32.670 "state": 
"configuring", 00:13:32.670 "raid_level": "concat", 00:13:32.670 "superblock": true, 00:13:32.670 "num_base_bdevs": 2, 00:13:32.670 "num_base_bdevs_discovered": 1, 00:13:32.670 "num_base_bdevs_operational": 2, 00:13:32.670 "base_bdevs_list": [ 00:13:32.670 { 00:13:32.670 "name": "BaseBdev1", 00:13:32.670 "uuid": "337cb6a6-a1a0-4ff7-bf0f-c31f6f001943", 00:13:32.670 "is_configured": true, 00:13:32.670 "data_offset": 2048, 00:13:32.670 "data_size": 63488 00:13:32.670 }, 00:13:32.670 { 00:13:32.670 "name": "BaseBdev2", 00:13:32.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.670 "is_configured": false, 00:13:32.670 "data_offset": 0, 00:13:32.670 "data_size": 0 00:13:32.670 } 00:13:32.670 ] 00:13:32.670 }' 00:13:32.670 14:15:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:32.670 14:15:24 -- common/autotest_common.sh@10 -- # set +x 00:13:33.236 14:15:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:33.495 [2024-11-18 14:15:25.343845] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.495 [2024-11-18 14:15:25.344304] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:13:33.495 [2024-11-18 14:15:25.344441] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:33.495 [2024-11-18 14:15:25.344637] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:33.495 BaseBdev2 00:13:33.495 [2024-11-18 14:15:25.345262] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:13:33.495 [2024-11-18 14:15:25.345382] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:13:33.495 [2024-11-18 14:15:25.345700] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.495 14:15:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:33.495 14:15:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:33.495 14:15:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:33.495 14:15:25 -- common/autotest_common.sh@899 -- # local i 00:13:33.495 14:15:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:33.495 14:15:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:33.495 14:15:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:33.754 14:15:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:34.012 [ 00:13:34.012 { 00:13:34.012 "name": "BaseBdev2", 00:13:34.012 "aliases": [ 00:13:34.012 "566abf30-acda-4156-8b75-5f3c5ff02caf" 00:13:34.012 ], 00:13:34.012 "product_name": "Malloc disk", 00:13:34.012 "block_size": 512, 00:13:34.012 "num_blocks": 65536, 00:13:34.012 "uuid": "566abf30-acda-4156-8b75-5f3c5ff02caf", 00:13:34.012 "assigned_rate_limits": { 00:13:34.012 "rw_ios_per_sec": 0, 00:13:34.012 "rw_mbytes_per_sec": 0, 00:13:34.012 "r_mbytes_per_sec": 0, 00:13:34.012 "w_mbytes_per_sec": 0 00:13:34.012 }, 00:13:34.012 "claimed": true, 00:13:34.012 "claim_type": "exclusive_write", 00:13:34.012 "zoned": false, 00:13:34.012 "supported_io_types": { 00:13:34.012 "read": true, 00:13:34.012 "write": true, 00:13:34.012 "unmap": true, 00:13:34.013 "write_zeroes": true, 00:13:34.013 "flush": true, 00:13:34.013 
"reset": true, 00:13:34.013 "compare": false, 00:13:34.013 "compare_and_write": false, 00:13:34.013 "abort": true, 00:13:34.013 "nvme_admin": false, 00:13:34.013 "nvme_io": false 00:13:34.013 }, 00:13:34.013 "memory_domains": [ 00:13:34.013 { 00:13:34.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.013 "dma_device_type": 2 00:13:34.013 } 00:13:34.013 ], 00:13:34.013 "driver_specific": {} 00:13:34.013 } 00:13:34.013 ] 00:13:34.013 14:15:25 -- common/autotest_common.sh@905 -- # return 0 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.013 14:15:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.013 14:15:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.013 "name": "Existed_Raid", 00:13:34.013 "uuid": "e1cdebb3-eee4-4b69-ab2b-a78315603a72", 00:13:34.013 "strip_size_kb": 64, 00:13:34.013 "state": "online", 00:13:34.013 "raid_level": "concat", 00:13:34.013 "superblock": true, 00:13:34.013 "num_base_bdevs": 2, 00:13:34.013 "num_base_bdevs_discovered": 2, 00:13:34.013 "num_base_bdevs_operational": 2, 00:13:34.013 "base_bdevs_list": [ 00:13:34.013 { 00:13:34.013 "name": "BaseBdev1", 00:13:34.013 "uuid": "337cb6a6-a1a0-4ff7-bf0f-c31f6f001943", 00:13:34.013 "is_configured": true, 00:13:34.013 "data_offset": 2048, 00:13:34.013 "data_size": 63488 00:13:34.013 }, 00:13:34.013 { 00:13:34.013 "name": "BaseBdev2", 00:13:34.013 "uuid": "566abf30-acda-4156-8b75-5f3c5ff02caf", 00:13:34.013 "is_configured": true, 00:13:34.013 "data_offset": 2048, 00:13:34.013 "data_size": 63488 00:13:34.013 } 00:13:34.013 ] 00:13:34.013 }' 00:13:34.013 14:15:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.013 14:15:26 -- common/autotest_common.sh@10 -- # set +x 00:13:34.580 14:15:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:34.839 [2024-11-18 14:15:26.792171] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.839 [2024-11-18 14:15:26.792199] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.839 [2024-11-18 14:15:26.792273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:34.839 
14:15:26 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.839 14:15:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.098 14:15:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:35.098 "name": "Existed_Raid", 00:13:35.098 "uuid": "e1cdebb3-eee4-4b69-ab2b-a78315603a72", 00:13:35.098 "strip_size_kb": 64, 00:13:35.098 "state": "offline", 00:13:35.098 "raid_level": "concat", 00:13:35.098 "superblock": true, 00:13:35.098 "num_base_bdevs": 2, 00:13:35.098 "num_base_bdevs_discovered": 1, 00:13:35.098 "num_base_bdevs_operational": 1, 00:13:35.098 "base_bdevs_list": [ 00:13:35.098 { 00:13:35.098 "name": null, 00:13:35.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.098 "is_configured": false, 00:13:35.098 "data_offset": 2048, 00:13:35.098 "data_size": 63488 00:13:35.098 }, 00:13:35.098 { 00:13:35.098 "name": "BaseBdev2", 00:13:35.098 "uuid": "566abf30-acda-4156-8b75-5f3c5ff02caf", 00:13:35.098 "is_configured": true, 00:13:35.098 "data_offset": 2048, 00:13:35.098 "data_size": 63488 00:13:35.098 } 00:13:35.098 ] 00:13:35.098 }' 00:13:35.098 14:15:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:35.098 14:15:27 -- common/autotest_common.sh@10 -- # set +x 00:13:35.665 14:15:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:35.665 14:15:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:35.665 14:15:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.665 14:15:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:35.923 14:15:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:35.923 14:15:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:35.923 14:15:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:36.181 [2024-11-18 14:15:28.112766] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.181 [2024-11-18 14:15:28.113799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:13:36.181 14:15:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:36.181 14:15:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:36.181 14:15:28 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.181 14:15:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:36.439 14:15:28 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:13:36.439 14:15:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:36.439 14:15:28 -- bdev/bdev_raid.sh@287 -- # killprocess 123625 00:13:36.439 14:15:28 -- common/autotest_common.sh@936 -- # '[' -z 123625 ']' 00:13:36.439 14:15:28 -- common/autotest_common.sh@940 -- # kill -0 123625 00:13:36.439 14:15:28 -- common/autotest_common.sh@941 -- # uname 00:13:36.439 14:15:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:36.439 14:15:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123625 00:13:36.439 14:15:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:36.439 14:15:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:36.439 14:15:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123625' 00:13:36.439 killing process with pid 123625 00:13:36.439 14:15:28 -- common/autotest_common.sh@955 -- # kill 123625 00:13:36.439 [2024-11-18 14:15:28.401593] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.439 14:15:28 -- common/autotest_common.sh@960 -- # wait 123625 00:13:36.439 [2024-11-18 14:15:28.401860] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:36.697 00:13:36.697 real 0m9.208s 00:13:36.697 user 0m16.667s 00:13:36.697 sys 0m1.174s 00:13:36.697 14:15:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:36.697 14:15:28 -- common/autotest_common.sh@10 -- # set +x 00:13:36.697 ************************************ 00:13:36.697 END TEST raid_state_function_test_sb 00:13:36.697 ************************************ 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:13:36.697 14:15:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:36.697 14:15:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:36.697 14:15:28 -- common/autotest_common.sh@10 -- # set +x 00:13:36.697 ************************************ 00:13:36.697 START TEST raid_superblock_test 00:13:36.697 ************************************ 00:13:36.697 14:15:28 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=123937 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123937 
/var/tmp/spdk-raid.sock 00:13:36.697 14:15:28 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:36.697 14:15:28 -- common/autotest_common.sh@829 -- # '[' -z 123937 ']' 00:13:36.697 14:15:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:36.697 14:15:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:36.697 14:15:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:36.697 14:15:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.697 14:15:28 -- common/autotest_common.sh@10 -- # set +x 00:13:36.956 [2024-11-18 14:15:28.808945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:36.956 [2024-11-18 14:15:28.809320] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123937 ] 00:13:36.956 [2024-11-18 14:15:28.945892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.956 [2024-11-18 14:15:29.018798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.215 [2024-11-18 14:15:29.089263] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.784 14:15:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.784 14:15:29 -- common/autotest_common.sh@862 -- # return 0 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:37.784 14:15:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:38.042 malloc1 00:13:38.042 14:15:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.300 [2024-11-18 14:15:30.161953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.300 [2024-11-18 14:15:30.162284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.300 [2024-11-18 14:15:30.162484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:13:38.300 [2024-11-18 14:15:30.162645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.300 [2024-11-18 14:15:30.165147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.300 [2024-11-18 14:15:30.165343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.300 pt1 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
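raid_superblock_test builds its base bdevs as passthru devices over malloc disks rather than using the mallocs directly, assigning each a fixed UUID. A sketch of the per-bdev setup just traced for pt1 (pt2 follows with the ...-0002 UUID; rpc.py here abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used throughout the trace):

    # 32 MB malloc disk with 512-byte blocks (32 MB / 512 B = 65536 blocks)
    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    # wrap it in a passthru bdev with a deterministic, test-chosen UUID
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The indirection matters later in the test: the passthru bdevs can be deleted while the malloc disks, superblock included, stay behind.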
00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.300 14:15:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:38.300 malloc2 00:13:38.559 14:15:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.559 [2024-11-18 14:15:30.555790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.559 [2024-11-18 14:15:30.556019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.559 [2024-11-18 14:15:30.556104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:38.559 [2024-11-18 14:15:30.556287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.559 [2024-11-18 14:15:30.558570] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.559 [2024-11-18 14:15:30.558728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.559 pt2 00:13:38.559 14:15:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:38.559 14:15:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:38.559 14:15:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:13:38.818 [2024-11-18 14:15:30.803887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:38.818 [2024-11-18 14:15:30.806058] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.818 [2024-11-18 14:15:30.806415] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:13:38.818 [2024-11-18 14:15:30.806546] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:38.818 [2024-11-18 14:15:30.806742] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:13:38.818 [2024-11-18 14:15:30.807233] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:13:38.818 [2024-11-18 14:15:30.807381] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:13:38.818 [2024-11-18 14:15:30.807654] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
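The array itself is then created with the superblock flag, which is the point of this test: with -s, the metadata describing the raid is persisted onto the base bdevs instead of living only in memory. As traced above:

    # -z 64: 64 KiB strip size for concat; -s: write a superblock to each base bdev
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

The cost is visible in the JSON that follows: data_offset is 2048 blocks (1 MiB at 512-byte blocks) and data_size drops from the mallocs' 65536 blocks to 63488, the difference being the reserved superblock region at the start of each base bdev.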
00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.818 14:15:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.077 14:15:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:39.077 "name": "raid_bdev1", 00:13:39.077 "uuid": "e9c1eba0-0ab9-4f22-af5e-60af7efdef04", 00:13:39.077 "strip_size_kb": 64, 00:13:39.077 "state": "online", 00:13:39.077 "raid_level": "concat", 00:13:39.077 "superblock": true, 00:13:39.077 "num_base_bdevs": 2, 00:13:39.077 "num_base_bdevs_discovered": 2, 00:13:39.077 "num_base_bdevs_operational": 2, 00:13:39.077 "base_bdevs_list": [ 00:13:39.077 { 00:13:39.077 "name": "pt1", 00:13:39.077 "uuid": "cf02b532-f71b-55d6-bcff-ad9f22ba146d", 00:13:39.077 "is_configured": true, 00:13:39.077 "data_offset": 2048, 00:13:39.077 "data_size": 63488 00:13:39.077 }, 00:13:39.077 { 00:13:39.077 "name": "pt2", 00:13:39.077 "uuid": "fcbc0ab6-cd21-5fae-8923-c6ddd020dfcb", 00:13:39.077 "is_configured": true, 00:13:39.077 "data_offset": 2048, 00:13:39.077 "data_size": 63488 00:13:39.077 } 00:13:39.077 ] 00:13:39.077 }' 00:13:39.077 14:15:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:39.077 14:15:31 -- common/autotest_common.sh@10 -- # set +x 00:13:39.644 14:15:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:39.644 14:15:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:39.903 [2024-11-18 14:15:31.824256] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.903 14:15:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e9c1eba0-0ab9-4f22-af5e-60af7efdef04 00:13:39.903 14:15:31 -- bdev/bdev_raid.sh@380 -- # '[' -z e9c1eba0-0ab9-4f22-af5e-60af7efdef04 ']' 00:13:39.903 14:15:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:40.163 [2024-11-18 14:15:32.016229] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.163 [2024-11-18 14:15:32.016413] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.163 [2024-11-18 14:15:32.016696] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.163 [2024-11-18 14:15:32.016924] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.163 [2024-11-18 14:15:32.017074] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:13:40.163 14:15:32 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.163 14:15:32 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:40.163 14:15:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:40.163 14:15:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:40.163 14:15:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:40.163 14:15:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
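Teardown is where the superblock behavior gets asserted. The raid and both passthru bdevs are deleted, but the superblocks written through to malloc1 and malloc2 survive, so re-creating an array directly on the mallocs must be rejected. A sketch of the sequence traced here and just below (NOT is the autotest_common.sh helper that passes only when its command fails):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
    # expected to fail: both mallocs still carry the raid_bdev1 superblock
    NOT rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1

The 'Existing raid superblock found on bdev malloc1/malloc2' errors and the JSON-RPC -17 'File exists' response further down are exactly this rejection firing.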
00:13:40.422 14:15:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:40.422 14:15:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:40.681 14:15:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:40.681 14:15:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:40.940 14:15:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:40.940 14:15:32 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:40.940 14:15:32 -- common/autotest_common.sh@650 -- # local es=0 00:13:40.940 14:15:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:40.940 14:15:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.940 14:15:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.940 14:15:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.940 14:15:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.940 14:15:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.940 14:15:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.940 14:15:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:40.940 14:15:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:40.940 14:15:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:13:41.198 [2024-11-18 14:15:33.108305] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:41.198 [2024-11-18 14:15:33.110445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:41.198 [2024-11-18 14:15:33.110640] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:41.198 [2024-11-18 14:15:33.110844] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:41.198 [2024-11-18 14:15:33.110935] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.198 [2024-11-18 14:15:33.111041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:13:41.198 request: 00:13:41.198 { 00:13:41.198 "name": "raid_bdev1", 00:13:41.198 "raid_level": "concat", 00:13:41.198 "base_bdevs": [ 00:13:41.198 "malloc1", 00:13:41.198 "malloc2" 00:13:41.198 ], 00:13:41.198 "superblock": false, 00:13:41.198 "strip_size_kb": 64, 00:13:41.198 "method": "bdev_raid_create", 00:13:41.198 "req_id": 1 00:13:41.198 } 00:13:41.198 Got JSON-RPC error response 00:13:41.198 response: 00:13:41.198 { 00:13:41.198 "code": -17, 00:13:41.198 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:41.198 } 00:13:41.198 14:15:33 -- common/autotest_common.sh@653 -- # es=1 00:13:41.198 14:15:33 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.198 14:15:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.198 14:15:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.198 14:15:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.198 14:15:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:41.456 14:15:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:41.456 14:15:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:41.456 14:15:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:41.715 [2024-11-18 14:15:33.532305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:41.715 [2024-11-18 14:15:33.532534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.715 [2024-11-18 14:15:33.532633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:13:41.715 [2024-11-18 14:15:33.532906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.715 [2024-11-18 14:15:33.535358] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.715 [2024-11-18 14:15:33.535546] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:41.715 [2024-11-18 14:15:33.535728] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:41.715 [2024-11-18 14:15:33.535919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:41.715 pt1 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:41.715 "name": "raid_bdev1", 00:13:41.715 "uuid": "e9c1eba0-0ab9-4f22-af5e-60af7efdef04", 00:13:41.715 "strip_size_kb": 64, 00:13:41.715 "state": "configuring", 00:13:41.715 "raid_level": "concat", 00:13:41.715 "superblock": true, 00:13:41.715 "num_base_bdevs": 2, 00:13:41.715 "num_base_bdevs_discovered": 1, 00:13:41.715 "num_base_bdevs_operational": 2, 00:13:41.715 "base_bdevs_list": [ 00:13:41.715 { 00:13:41.715 "name": "pt1", 00:13:41.715 "uuid": "cf02b532-f71b-55d6-bcff-ad9f22ba146d", 00:13:41.715 "is_configured": true, 00:13:41.715 "data_offset": 2048, 00:13:41.715 "data_size": 63488 00:13:41.715 }, 00:13:41.715 { 00:13:41.715 "name": null, 00:13:41.715 "uuid": 
"fcbc0ab6-cd21-5fae-8923-c6ddd020dfcb", 00:13:41.715 "is_configured": false, 00:13:41.715 "data_offset": 2048, 00:13:41.715 "data_size": 63488 00:13:41.715 } 00:13:41.715 ] 00:13:41.715 }' 00:13:41.715 14:15:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:41.715 14:15:33 -- common/autotest_common.sh@10 -- # set +x 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.653 [2024-11-18 14:15:34.616504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.653 [2024-11-18 14:15:34.616733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.653 [2024-11-18 14:15:34.616810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:13:42.653 [2024-11-18 14:15:34.617102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.653 [2024-11-18 14:15:34.617544] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.653 [2024-11-18 14:15:34.617622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.653 [2024-11-18 14:15:34.617721] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:42.653 [2024-11-18 14:15:34.617785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.653 [2024-11-18 14:15:34.617915] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:13:42.653 [2024-11-18 14:15:34.617961] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:42.653 [2024-11-18 14:15:34.618063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:13:42.653 [2024-11-18 14:15:34.618396] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:13:42.653 [2024-11-18 14:15:34.618618] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:13:42.653 [2024-11-18 14:15:34.618756] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.653 pt2 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.653 14:15:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.913 14:15:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:42.913 "name": "raid_bdev1", 00:13:42.913 "uuid": "e9c1eba0-0ab9-4f22-af5e-60af7efdef04", 00:13:42.913 "strip_size_kb": 64, 00:13:42.913 "state": "online", 00:13:42.913 "raid_level": "concat", 00:13:42.913 "superblock": true, 00:13:42.913 "num_base_bdevs": 2, 00:13:42.913 "num_base_bdevs_discovered": 2, 00:13:42.913 "num_base_bdevs_operational": 2, 00:13:42.913 "base_bdevs_list": [ 00:13:42.913 { 00:13:42.913 "name": "pt1", 00:13:42.913 "uuid": "cf02b532-f71b-55d6-bcff-ad9f22ba146d", 00:13:42.913 "is_configured": true, 00:13:42.913 "data_offset": 2048, 00:13:42.913 "data_size": 63488 00:13:42.913 }, 00:13:42.913 { 00:13:42.913 "name": "pt2", 00:13:42.913 "uuid": "fcbc0ab6-cd21-5fae-8923-c6ddd020dfcb", 00:13:42.913 "is_configured": true, 00:13:42.913 "data_offset": 2048, 00:13:42.913 "data_size": 63488 00:13:42.913 } 00:13:42.913 ] 00:13:42.913 }' 00:13:42.913 14:15:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:42.913 14:15:34 -- common/autotest_common.sh@10 -- # set +x 00:13:43.480 14:15:35 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.480 14:15:35 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:43.739 [2024-11-18 14:15:35.704850] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.739 14:15:35 -- bdev/bdev_raid.sh@430 -- # '[' e9c1eba0-0ab9-4f22-af5e-60af7efdef04 '!=' e9c1eba0-0ab9-4f22-af5e-60af7efdef04 ']' 00:13:43.739 14:15:35 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:13:43.739 14:15:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:43.739 14:15:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:43.739 14:15:35 -- bdev/bdev_raid.sh@511 -- # killprocess 123937 00:13:43.739 14:15:35 -- common/autotest_common.sh@936 -- # '[' -z 123937 ']' 00:13:43.739 14:15:35 -- common/autotest_common.sh@940 -- # kill -0 123937 00:13:43.739 14:15:35 -- common/autotest_common.sh@941 -- # uname 00:13:43.739 14:15:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.739 14:15:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123937 00:13:43.739 killing process with pid 123937 00:13:43.739 14:15:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:43.739 14:15:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:43.739 14:15:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123937' 00:13:43.739 14:15:35 -- common/autotest_common.sh@955 -- # kill 123937 00:13:43.739 [2024-11-18 14:15:35.744625] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.739 14:15:35 -- common/autotest_common.sh@960 -- # wait 123937 00:13:43.739 [2024-11-18 14:15:35.744712] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.739 [2024-11-18 14:15:35.744760] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.739 [2024-11-18 14:15:35.744772] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:13:43.739 [2024-11-18 14:15:35.766954] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.998 ************************************ 00:13:43.998 END TEST raid_superblock_test 00:13:43.998 
************************************ 00:13:43.998 14:15:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:43.998 00:13:43.998 real 0m7.302s 00:13:43.998 user 0m12.992s 00:13:43.998 sys 0m1.007s 00:13:43.998 14:15:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:43.998 14:15:36 -- common/autotest_common.sh@10 -- # set +x 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:44.257 14:15:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:44.257 14:15:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.257 14:15:36 -- common/autotest_common.sh@10 -- # set +x 00:13:44.257 ************************************ 00:13:44.257 START TEST raid_state_function_test 00:13:44.257 ************************************ 00:13:44.257 14:15:36 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=124175 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124175' 00:13:44.257 Process raid pid: 124175 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:44.257 14:15:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124175 /var/tmp/spdk-raid.sock 00:13:44.257 14:15:36 -- common/autotest_common.sh@829 -- # '[' -z 124175 ']' 00:13:44.257 14:15:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.257 14:15:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
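This run repeats raid_state_function_test with raid1 and no superblock. Two differences from the concat runs show up in the trace that follows: strip_size is forced to 0 (raid1 mirrors rather than stripes, so no -z argument is built), and superblock_create_arg stays empty because superblock=false. Creation therefore reduces to:

    # raid1: no strip size, no superblock flag
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

Everything else, from verify_raid_bdev_state through the configuring-state JSON checks, follows the same shape as the concat variants above.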
00:13:44.257 14:15:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:44.258 14:15:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.258 14:15:36 -- common/autotest_common.sh@10 -- # set +x 00:13:44.258 [2024-11-18 14:15:36.175690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:44.258 [2024-11-18 14:15:36.176105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.258 [2024-11-18 14:15:36.319225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.516 [2024-11-18 14:15:36.385132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.516 [2024-11-18 14:15:36.455740] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.452 14:15:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.452 14:15:37 -- common/autotest_common.sh@862 -- # return 0 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:45.452 [2024-11-18 14:15:37.410303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:45.452 [2024-11-18 14:15:37.410612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:45.452 [2024-11-18 14:15:37.410726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:45.452 [2024-11-18 14:15:37.410874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.452 14:15:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.711 14:15:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.711 "name": "Existed_Raid", 00:13:45.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.711 "strip_size_kb": 0, 00:13:45.711 "state": "configuring", 00:13:45.711 "raid_level": "raid1", 00:13:45.711 "superblock": false, 00:13:45.711 "num_base_bdevs": 2, 00:13:45.711 "num_base_bdevs_discovered": 0, 00:13:45.711 "num_base_bdevs_operational": 2, 00:13:45.711 "base_bdevs_list": [ 00:13:45.711 { 00:13:45.711 "name": "BaseBdev1", 00:13:45.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.711 "is_configured": false, 00:13:45.711 
"data_offset": 0, 00:13:45.711 "data_size": 0 00:13:45.711 }, 00:13:45.711 { 00:13:45.711 "name": "BaseBdev2", 00:13:45.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.711 "is_configured": false, 00:13:45.711 "data_offset": 0, 00:13:45.711 "data_size": 0 00:13:45.711 } 00:13:45.711 ] 00:13:45.711 }' 00:13:45.711 14:15:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.711 14:15:37 -- common/autotest_common.sh@10 -- # set +x 00:13:46.278 14:15:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:46.537 [2024-11-18 14:15:38.442324] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:46.537 [2024-11-18 14:15:38.442498] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:13:46.537 14:15:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:46.795 [2024-11-18 14:15:38.630371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:46.795 [2024-11-18 14:15:38.630583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:46.795 [2024-11-18 14:15:38.630707] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:46.795 [2024-11-18 14:15:38.630780] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:46.795 14:15:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:47.053 [2024-11-18 14:15:38.896361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.053 BaseBdev1 00:13:47.053 14:15:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:47.053 14:15:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:47.053 14:15:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:47.053 14:15:38 -- common/autotest_common.sh@899 -- # local i 00:13:47.053 14:15:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:47.053 14:15:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:47.053 14:15:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.321 14:15:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:47.321 [ 00:13:47.321 { 00:13:47.321 "name": "BaseBdev1", 00:13:47.321 "aliases": [ 00:13:47.321 "ceb576e2-21ce-427d-96a9-418a2c92bca5" 00:13:47.321 ], 00:13:47.321 "product_name": "Malloc disk", 00:13:47.321 "block_size": 512, 00:13:47.321 "num_blocks": 65536, 00:13:47.321 "uuid": "ceb576e2-21ce-427d-96a9-418a2c92bca5", 00:13:47.321 "assigned_rate_limits": { 00:13:47.321 "rw_ios_per_sec": 0, 00:13:47.321 "rw_mbytes_per_sec": 0, 00:13:47.321 "r_mbytes_per_sec": 0, 00:13:47.321 "w_mbytes_per_sec": 0 00:13:47.321 }, 00:13:47.321 "claimed": true, 00:13:47.321 "claim_type": "exclusive_write", 00:13:47.321 "zoned": false, 00:13:47.321 "supported_io_types": { 00:13:47.321 "read": true, 00:13:47.321 "write": true, 00:13:47.321 "unmap": true, 00:13:47.321 "write_zeroes": true, 00:13:47.321 "flush": true, 00:13:47.321 "reset": true, 00:13:47.321 "compare": false, 
00:13:47.321 "compare_and_write": false, 00:13:47.321 "abort": true, 00:13:47.321 "nvme_admin": false, 00:13:47.321 "nvme_io": false 00:13:47.321 }, 00:13:47.321 "memory_domains": [ 00:13:47.321 { 00:13:47.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.321 "dma_device_type": 2 00:13:47.321 } 00:13:47.321 ], 00:13:47.321 "driver_specific": {} 00:13:47.321 } 00:13:47.321 ] 00:13:47.321 14:15:39 -- common/autotest_common.sh@905 -- # return 0 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.321 14:15:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.583 14:15:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:47.583 "name": "Existed_Raid", 00:13:47.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.583 "strip_size_kb": 0, 00:13:47.583 "state": "configuring", 00:13:47.583 "raid_level": "raid1", 00:13:47.583 "superblock": false, 00:13:47.583 "num_base_bdevs": 2, 00:13:47.583 "num_base_bdevs_discovered": 1, 00:13:47.583 "num_base_bdevs_operational": 2, 00:13:47.583 "base_bdevs_list": [ 00:13:47.583 { 00:13:47.583 "name": "BaseBdev1", 00:13:47.583 "uuid": "ceb576e2-21ce-427d-96a9-418a2c92bca5", 00:13:47.584 "is_configured": true, 00:13:47.584 "data_offset": 0, 00:13:47.584 "data_size": 65536 00:13:47.584 }, 00:13:47.584 { 00:13:47.584 "name": "BaseBdev2", 00:13:47.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.584 "is_configured": false, 00:13:47.584 "data_offset": 0, 00:13:47.584 "data_size": 0 00:13:47.584 } 00:13:47.584 ] 00:13:47.584 }' 00:13:47.584 14:15:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:47.584 14:15:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.150 14:15:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:48.409 [2024-11-18 14:15:40.356600] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.409 [2024-11-18 14:15:40.356791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:48.409 14:15:40 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:48.409 14:15:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:48.667 [2024-11-18 14:15:40.576703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.667 [2024-11-18 14:15:40.578872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.667 [2024-11-18 
14:15:40.579063] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.667 14:15:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.925 14:15:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:48.925 "name": "Existed_Raid", 00:13:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.925 "strip_size_kb": 0, 00:13:48.925 "state": "configuring", 00:13:48.925 "raid_level": "raid1", 00:13:48.925 "superblock": false, 00:13:48.925 "num_base_bdevs": 2, 00:13:48.925 "num_base_bdevs_discovered": 1, 00:13:48.925 "num_base_bdevs_operational": 2, 00:13:48.925 "base_bdevs_list": [ 00:13:48.925 { 00:13:48.925 "name": "BaseBdev1", 00:13:48.925 "uuid": "ceb576e2-21ce-427d-96a9-418a2c92bca5", 00:13:48.925 "is_configured": true, 00:13:48.925 "data_offset": 0, 00:13:48.925 "data_size": 65536 00:13:48.925 }, 00:13:48.925 { 00:13:48.925 "name": "BaseBdev2", 00:13:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.925 "is_configured": false, 00:13:48.925 "data_offset": 0, 00:13:48.925 "data_size": 0 00:13:48.925 } 00:13:48.925 ] 00:13:48.925 }' 00:13:48.925 14:15:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:48.925 14:15:40 -- common/autotest_common.sh@10 -- # set +x 00:13:49.492 14:15:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.751 [2024-11-18 14:15:41.672909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.751 [2024-11-18 14:15:41.673241] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:13:49.751 [2024-11-18 14:15:41.673509] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:49.751 [2024-11-18 14:15:41.673995] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:13:49.751 [2024-11-18 14:15:41.674587] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:13:49.751 [2024-11-18 14:15:41.674731] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:13:49.751 [2024-11-18 14:15:41.675205] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.751 BaseBdev2 00:13:49.751 14:15:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:49.751 
14:15:41 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:49.751 14:15:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:49.751 14:15:41 -- common/autotest_common.sh@899 -- # local i 00:13:49.751 14:15:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:49.751 14:15:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:49.751 14:15:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:50.010 14:15:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.269 [ 00:13:50.269 { 00:13:50.269 "name": "BaseBdev2", 00:13:50.269 "aliases": [ 00:13:50.269 "711629a6-02cd-4fee-a746-5004786bde42" 00:13:50.269 ], 00:13:50.269 "product_name": "Malloc disk", 00:13:50.269 "block_size": 512, 00:13:50.269 "num_blocks": 65536, 00:13:50.269 "uuid": "711629a6-02cd-4fee-a746-5004786bde42", 00:13:50.269 "assigned_rate_limits": { 00:13:50.269 "rw_ios_per_sec": 0, 00:13:50.269 "rw_mbytes_per_sec": 0, 00:13:50.269 "r_mbytes_per_sec": 0, 00:13:50.269 "w_mbytes_per_sec": 0 00:13:50.269 }, 00:13:50.269 "claimed": true, 00:13:50.269 "claim_type": "exclusive_write", 00:13:50.269 "zoned": false, 00:13:50.269 "supported_io_types": { 00:13:50.269 "read": true, 00:13:50.269 "write": true, 00:13:50.269 "unmap": true, 00:13:50.269 "write_zeroes": true, 00:13:50.269 "flush": true, 00:13:50.269 "reset": true, 00:13:50.269 "compare": false, 00:13:50.269 "compare_and_write": false, 00:13:50.269 "abort": true, 00:13:50.269 "nvme_admin": false, 00:13:50.269 "nvme_io": false 00:13:50.269 }, 00:13:50.269 "memory_domains": [ 00:13:50.269 { 00:13:50.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.269 "dma_device_type": 2 00:13:50.269 } 00:13:50.269 ], 00:13:50.269 "driver_specific": {} 00:13:50.269 } 00:13:50.269 ] 00:13:50.269 14:15:42 -- common/autotest_common.sh@905 -- # return 0 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.269 14:15:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.529 14:15:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:50.529 "name": "Existed_Raid", 00:13:50.529 "uuid": "0560acd6-cfb3-401c-84fd-402cf526d478", 00:13:50.529 "strip_size_kb": 0, 00:13:50.529 "state": "online", 00:13:50.529 "raid_level": "raid1", 00:13:50.529 "superblock": false, 00:13:50.529 "num_base_bdevs": 2, 00:13:50.529 
"num_base_bdevs_discovered": 2, 00:13:50.529 "num_base_bdevs_operational": 2, 00:13:50.529 "base_bdevs_list": [ 00:13:50.529 { 00:13:50.529 "name": "BaseBdev1", 00:13:50.529 "uuid": "ceb576e2-21ce-427d-96a9-418a2c92bca5", 00:13:50.529 "is_configured": true, 00:13:50.529 "data_offset": 0, 00:13:50.529 "data_size": 65536 00:13:50.529 }, 00:13:50.529 { 00:13:50.529 "name": "BaseBdev2", 00:13:50.529 "uuid": "711629a6-02cd-4fee-a746-5004786bde42", 00:13:50.529 "is_configured": true, 00:13:50.529 "data_offset": 0, 00:13:50.529 "data_size": 65536 00:13:50.529 } 00:13:50.529 ] 00:13:50.529 }' 00:13:50.529 14:15:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:50.529 14:15:42 -- common/autotest_common.sh@10 -- # set +x 00:13:51.097 14:15:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:51.356 [2024-11-18 14:15:43.221356] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.356 14:15:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.615 14:15:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:51.615 "name": "Existed_Raid", 00:13:51.615 "uuid": "0560acd6-cfb3-401c-84fd-402cf526d478", 00:13:51.615 "strip_size_kb": 0, 00:13:51.615 "state": "online", 00:13:51.615 "raid_level": "raid1", 00:13:51.615 "superblock": false, 00:13:51.615 "num_base_bdevs": 2, 00:13:51.615 "num_base_bdevs_discovered": 1, 00:13:51.615 "num_base_bdevs_operational": 1, 00:13:51.615 "base_bdevs_list": [ 00:13:51.615 { 00:13:51.615 "name": null, 00:13:51.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.615 "is_configured": false, 00:13:51.615 "data_offset": 0, 00:13:51.615 "data_size": 65536 00:13:51.615 }, 00:13:51.615 { 00:13:51.615 "name": "BaseBdev2", 00:13:51.615 "uuid": "711629a6-02cd-4fee-a746-5004786bde42", 00:13:51.615 "is_configured": true, 00:13:51.615 "data_offset": 0, 00:13:51.615 "data_size": 65536 00:13:51.615 } 00:13:51.615 ] 00:13:51.615 }' 00:13:51.615 14:15:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:51.615 14:15:43 -- common/autotest_common.sh@10 -- # set +x 00:13:52.182 14:15:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:52.182 14:15:44 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:13:52.182 14:15:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.182 14:15:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:52.441 14:15:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:52.441 14:15:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:52.441 14:15:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:52.700 [2024-11-18 14:15:44.555948] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.700 [2024-11-18 14:15:44.556100] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:52.700 [2024-11-18 14:15:44.556301] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.700 [2024-11-18 14:15:44.566548] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.700 [2024-11-18 14:15:44.566730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:13:52.700 14:15:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:52.700 14:15:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:52.700 14:15:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.700 14:15:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.959 14:15:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:52.959 14:15:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:52.959 14:15:44 -- bdev/bdev_raid.sh@287 -- # killprocess 124175 00:13:52.959 14:15:44 -- common/autotest_common.sh@936 -- # '[' -z 124175 ']' 00:13:52.959 14:15:44 -- common/autotest_common.sh@940 -- # kill -0 124175 00:13:52.959 14:15:44 -- common/autotest_common.sh@941 -- # uname 00:13:52.959 14:15:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:52.959 14:15:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124175 00:13:52.959 killing process with pid 124175 00:13:52.959 14:15:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:52.959 14:15:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:52.959 14:15:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124175' 00:13:52.959 14:15:44 -- common/autotest_common.sh@955 -- # kill 124175 00:13:52.959 [2024-11-18 14:15:44.860003] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.959 14:15:44 -- common/autotest_common.sh@960 -- # wait 124175 00:13:52.959 [2024-11-18 14:15:44.860082] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.218 ************************************ 00:13:53.218 END TEST raid_state_function_test 00:13:53.218 ************************************ 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:53.218 00:13:53.218 real 0m8.965s 00:13:53.218 user 0m16.359s 00:13:53.218 sys 0m1.079s 00:13:53.218 14:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:53.218 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:53.218 14:15:45 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:53.218 14:15:45 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.218 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:13:53.218 ************************************ 00:13:53.218 START TEST raid_state_function_test_sb 00:13:53.218 ************************************ 00:13:53.218 14:15:45 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@226 -- # raid_pid=124485 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124485' 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:53.218 Process raid pid: 124485 00:13:53.218 14:15:45 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124485 /var/tmp/spdk-raid.sock 00:13:53.218 14:15:45 -- common/autotest_common.sh@829 -- # '[' -z 124485 ']' 00:13:53.218 14:15:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.218 14:15:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.218 14:15:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:53.218 14:15:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.218 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:13:53.218 [2024-11-18 14:15:45.203998] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
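The _sb variant drives the same state machine with superblock=true; the only change in the RPC sequence is the -s flag at creation time, after which each configured base bdev reports data_offset 2048 and data_size 63488 instead of 0 and 65536 in the JSON dumps that follow. A sketch under the same assumptions as the one above:

    # -s asks bdev_raid_create for an on-disk superblock; 2048 blocks
    # (1 MiB at the 512-byte block size used here) are then reserved on
    # each configured base bdev:
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $RPC bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[] | "\(.name) \(.data_offset) \(.data_size)"'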
00:13:53.219 [2024-11-18 14:15:45.204408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.478 [2024-11-18 14:15:45.353930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.478 [2024-11-18 14:15:45.440575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.478 [2024-11-18 14:15:45.523038] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.414 14:15:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.414 14:15:46 -- common/autotest_common.sh@862 -- # return 0 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:54.414 [2024-11-18 14:15:46.333850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.414 [2024-11-18 14:15:46.334121] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.414 [2024-11-18 14:15:46.334234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.414 [2024-11-18 14:15:46.334293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.414 14:15:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.673 14:15:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:54.673 "name": "Existed_Raid", 00:13:54.673 "uuid": "9c1d1502-25c7-4904-8ef0-8ba97f7a6686", 00:13:54.673 "strip_size_kb": 0, 00:13:54.673 "state": "configuring", 00:13:54.673 "raid_level": "raid1", 00:13:54.673 "superblock": true, 00:13:54.673 "num_base_bdevs": 2, 00:13:54.673 "num_base_bdevs_discovered": 0, 00:13:54.673 "num_base_bdevs_operational": 2, 00:13:54.673 "base_bdevs_list": [ 00:13:54.673 { 00:13:54.673 "name": "BaseBdev1", 00:13:54.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.673 "is_configured": false, 00:13:54.673 "data_offset": 0, 00:13:54.673 "data_size": 0 00:13:54.673 }, 00:13:54.673 { 00:13:54.673 "name": "BaseBdev2", 00:13:54.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.673 "is_configured": false, 00:13:54.673 "data_offset": 0, 00:13:54.673 "data_size": 0 00:13:54.673 } 00:13:54.673 ] 00:13:54.673 }' 00:13:54.673 14:15:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:54.673 14:15:46 -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.242 14:15:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.501 [2024-11-18 14:15:47.333855] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.501 [2024-11-18 14:15:47.334008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:13:55.501 14:15:47 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:55.759 [2024-11-18 14:15:47.581929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.759 [2024-11-18 14:15:47.582127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.759 [2024-11-18 14:15:47.582249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.759 [2024-11-18 14:15:47.582315] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.759 14:15:47 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.017 [2024-11-18 14:15:47.835653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.017 BaseBdev1 00:13:56.017 14:15:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:56.017 14:15:47 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:56.017 14:15:47 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:56.017 14:15:47 -- common/autotest_common.sh@899 -- # local i 00:13:56.017 14:15:47 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:56.017 14:15:47 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:56.017 14:15:47 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.017 14:15:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.276 [ 00:13:56.276 { 00:13:56.276 "name": "BaseBdev1", 00:13:56.276 "aliases": [ 00:13:56.276 "04505e91-f4d3-499c-a030-7b99617aed89" 00:13:56.276 ], 00:13:56.276 "product_name": "Malloc disk", 00:13:56.276 "block_size": 512, 00:13:56.276 "num_blocks": 65536, 00:13:56.276 "uuid": "04505e91-f4d3-499c-a030-7b99617aed89", 00:13:56.276 "assigned_rate_limits": { 00:13:56.276 "rw_ios_per_sec": 0, 00:13:56.276 "rw_mbytes_per_sec": 0, 00:13:56.276 "r_mbytes_per_sec": 0, 00:13:56.276 "w_mbytes_per_sec": 0 00:13:56.276 }, 00:13:56.276 "claimed": true, 00:13:56.276 "claim_type": "exclusive_write", 00:13:56.276 "zoned": false, 00:13:56.276 "supported_io_types": { 00:13:56.276 "read": true, 00:13:56.276 "write": true, 00:13:56.276 "unmap": true, 00:13:56.276 "write_zeroes": true, 00:13:56.276 "flush": true, 00:13:56.276 "reset": true, 00:13:56.276 "compare": false, 00:13:56.276 "compare_and_write": false, 00:13:56.276 "abort": true, 00:13:56.276 "nvme_admin": false, 00:13:56.276 "nvme_io": false 00:13:56.276 }, 00:13:56.276 "memory_domains": [ 00:13:56.276 { 00:13:56.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.276 "dma_device_type": 2 00:13:56.276 } 00:13:56.276 ], 00:13:56.276 "driver_specific": {} 00:13:56.276 } 00:13:56.276 ] 00:13:56.276 14:15:48 -- 
common/autotest_common.sh@905 -- # return 0 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.276 14:15:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.533 14:15:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.533 "name": "Existed_Raid", 00:13:56.533 "uuid": "cbcad885-f7ec-4638-a220-db7fc150529f", 00:13:56.533 "strip_size_kb": 0, 00:13:56.533 "state": "configuring", 00:13:56.533 "raid_level": "raid1", 00:13:56.533 "superblock": true, 00:13:56.533 "num_base_bdevs": 2, 00:13:56.533 "num_base_bdevs_discovered": 1, 00:13:56.533 "num_base_bdevs_operational": 2, 00:13:56.533 "base_bdevs_list": [ 00:13:56.533 { 00:13:56.533 "name": "BaseBdev1", 00:13:56.533 "uuid": "04505e91-f4d3-499c-a030-7b99617aed89", 00:13:56.533 "is_configured": true, 00:13:56.533 "data_offset": 2048, 00:13:56.533 "data_size": 63488 00:13:56.533 }, 00:13:56.533 { 00:13:56.533 "name": "BaseBdev2", 00:13:56.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.533 "is_configured": false, 00:13:56.533 "data_offset": 0, 00:13:56.533 "data_size": 0 00:13:56.533 } 00:13:56.533 ] 00:13:56.533 }' 00:13:56.533 14:15:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.533 14:15:48 -- common/autotest_common.sh@10 -- # set +x 00:13:57.469 14:15:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:57.469 [2024-11-18 14:15:49.435930] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.469 [2024-11-18 14:15:49.436092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:57.469 14:15:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:57.469 14:15:49 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:57.749 14:15:49 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:58.023 BaseBdev1 00:13:58.023 14:15:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:58.023 14:15:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:58.023 14:15:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:58.023 14:15:49 -- common/autotest_common.sh@899 -- # local i 00:13:58.023 14:15:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:58.023 14:15:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:58.023 14:15:49 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.280 14:15:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.280 [ 00:13:58.280 { 00:13:58.280 "name": "BaseBdev1", 00:13:58.280 "aliases": [ 00:13:58.280 "cfa147d7-d73c-4f09-91aa-c59e1d3b8ee7" 00:13:58.280 ], 00:13:58.280 "product_name": "Malloc disk", 00:13:58.280 "block_size": 512, 00:13:58.280 "num_blocks": 65536, 00:13:58.280 "uuid": "cfa147d7-d73c-4f09-91aa-c59e1d3b8ee7", 00:13:58.280 "assigned_rate_limits": { 00:13:58.280 "rw_ios_per_sec": 0, 00:13:58.280 "rw_mbytes_per_sec": 0, 00:13:58.280 "r_mbytes_per_sec": 0, 00:13:58.280 "w_mbytes_per_sec": 0 00:13:58.280 }, 00:13:58.280 "claimed": false, 00:13:58.280 "zoned": false, 00:13:58.280 "supported_io_types": { 00:13:58.280 "read": true, 00:13:58.280 "write": true, 00:13:58.280 "unmap": true, 00:13:58.280 "write_zeroes": true, 00:13:58.280 "flush": true, 00:13:58.280 "reset": true, 00:13:58.280 "compare": false, 00:13:58.280 "compare_and_write": false, 00:13:58.280 "abort": true, 00:13:58.280 "nvme_admin": false, 00:13:58.280 "nvme_io": false 00:13:58.280 }, 00:13:58.280 "memory_domains": [ 00:13:58.280 { 00:13:58.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.280 "dma_device_type": 2 00:13:58.280 } 00:13:58.280 ], 00:13:58.280 "driver_specific": {} 00:13:58.280 } 00:13:58.280 ] 00:13:58.538 14:15:50 -- common/autotest_common.sh@905 -- # return 0 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:58.538 [2024-11-18 14:15:50.551409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.538 [2024-11-18 14:15:50.553445] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.538 [2024-11-18 14:15:50.553628] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.538 14:15:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.795 14:15:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.795 "name": "Existed_Raid", 00:13:58.795 "uuid": "daa8191a-a700-4f91-8feb-f49000de75bb", 00:13:58.795 "strip_size_kb": 0, 00:13:58.795 "state": "configuring", 
00:13:58.795 "raid_level": "raid1", 00:13:58.795 "superblock": true, 00:13:58.795 "num_base_bdevs": 2, 00:13:58.795 "num_base_bdevs_discovered": 1, 00:13:58.795 "num_base_bdevs_operational": 2, 00:13:58.795 "base_bdevs_list": [ 00:13:58.795 { 00:13:58.795 "name": "BaseBdev1", 00:13:58.795 "uuid": "cfa147d7-d73c-4f09-91aa-c59e1d3b8ee7", 00:13:58.795 "is_configured": true, 00:13:58.795 "data_offset": 2048, 00:13:58.795 "data_size": 63488 00:13:58.795 }, 00:13:58.795 { 00:13:58.795 "name": "BaseBdev2", 00:13:58.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.795 "is_configured": false, 00:13:58.795 "data_offset": 0, 00:13:58.795 "data_size": 0 00:13:58.795 } 00:13:58.795 ] 00:13:58.795 }' 00:13:58.795 14:15:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.795 14:15:50 -- common/autotest_common.sh@10 -- # set +x 00:13:59.726 14:15:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:59.726 [2024-11-18 14:15:51.720152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.726 [2024-11-18 14:15:51.720594] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:13:59.726 [2024-11-18 14:15:51.720757] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:59.726 [2024-11-18 14:15:51.720972] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:13:59.726 BaseBdev2 00:13:59.726 [2024-11-18 14:15:51.721656] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:13:59.726 [2024-11-18 14:15:51.721677] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:13:59.726 [2024-11-18 14:15:51.721890] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.726 14:15:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:59.726 14:15:51 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:59.726 14:15:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.726 14:15:51 -- common/autotest_common.sh@899 -- # local i 00:13:59.726 14:15:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.726 14:15:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.726 14:15:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.984 14:15:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:00.242 [ 00:14:00.242 { 00:14:00.242 "name": "BaseBdev2", 00:14:00.242 "aliases": [ 00:14:00.242 "6b5a27ad-88ee-44ce-ba24-e9cb6c780249" 00:14:00.242 ], 00:14:00.242 "product_name": "Malloc disk", 00:14:00.242 "block_size": 512, 00:14:00.242 "num_blocks": 65536, 00:14:00.242 "uuid": "6b5a27ad-88ee-44ce-ba24-e9cb6c780249", 00:14:00.242 "assigned_rate_limits": { 00:14:00.242 "rw_ios_per_sec": 0, 00:14:00.242 "rw_mbytes_per_sec": 0, 00:14:00.242 "r_mbytes_per_sec": 0, 00:14:00.242 "w_mbytes_per_sec": 0 00:14:00.242 }, 00:14:00.242 "claimed": true, 00:14:00.242 "claim_type": "exclusive_write", 00:14:00.242 "zoned": false, 00:14:00.242 "supported_io_types": { 00:14:00.242 "read": true, 00:14:00.242 "write": true, 00:14:00.242 "unmap": true, 00:14:00.242 "write_zeroes": true, 00:14:00.242 "flush": true, 00:14:00.242 "reset": true, 
00:14:00.242 "compare": false, 00:14:00.242 "compare_and_write": false, 00:14:00.242 "abort": true, 00:14:00.242 "nvme_admin": false, 00:14:00.242 "nvme_io": false 00:14:00.242 }, 00:14:00.242 "memory_domains": [ 00:14:00.242 { 00:14:00.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.242 "dma_device_type": 2 00:14:00.242 } 00:14:00.242 ], 00:14:00.242 "driver_specific": {} 00:14:00.242 } 00:14:00.242 ] 00:14:00.242 14:15:52 -- common/autotest_common.sh@905 -- # return 0 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.242 14:15:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.501 14:15:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.501 "name": "Existed_Raid", 00:14:00.501 "uuid": "daa8191a-a700-4f91-8feb-f49000de75bb", 00:14:00.501 "strip_size_kb": 0, 00:14:00.501 "state": "online", 00:14:00.501 "raid_level": "raid1", 00:14:00.501 "superblock": true, 00:14:00.501 "num_base_bdevs": 2, 00:14:00.501 "num_base_bdevs_discovered": 2, 00:14:00.501 "num_base_bdevs_operational": 2, 00:14:00.501 "base_bdevs_list": [ 00:14:00.501 { 00:14:00.501 "name": "BaseBdev1", 00:14:00.501 "uuid": "cfa147d7-d73c-4f09-91aa-c59e1d3b8ee7", 00:14:00.501 "is_configured": true, 00:14:00.501 "data_offset": 2048, 00:14:00.501 "data_size": 63488 00:14:00.501 }, 00:14:00.501 { 00:14:00.501 "name": "BaseBdev2", 00:14:00.501 "uuid": "6b5a27ad-88ee-44ce-ba24-e9cb6c780249", 00:14:00.501 "is_configured": true, 00:14:00.501 "data_offset": 2048, 00:14:00.501 "data_size": 63488 00:14:00.501 } 00:14:00.501 ] 00:14:00.501 }' 00:14:00.501 14:15:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.501 14:15:52 -- common/autotest_common.sh@10 -- # set +x 00:14:01.068 14:15:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:01.327 [2024-11-18 14:15:53.323977] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:01.327 
14:15:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.327 14:15:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.586 14:15:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:01.586 "name": "Existed_Raid", 00:14:01.586 "uuid": "daa8191a-a700-4f91-8feb-f49000de75bb", 00:14:01.586 "strip_size_kb": 0, 00:14:01.586 "state": "online", 00:14:01.586 "raid_level": "raid1", 00:14:01.586 "superblock": true, 00:14:01.586 "num_base_bdevs": 2, 00:14:01.586 "num_base_bdevs_discovered": 1, 00:14:01.586 "num_base_bdevs_operational": 1, 00:14:01.586 "base_bdevs_list": [ 00:14:01.586 { 00:14:01.586 "name": null, 00:14:01.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.586 "is_configured": false, 00:14:01.586 "data_offset": 2048, 00:14:01.586 "data_size": 63488 00:14:01.586 }, 00:14:01.586 { 00:14:01.586 "name": "BaseBdev2", 00:14:01.586 "uuid": "6b5a27ad-88ee-44ce-ba24-e9cb6c780249", 00:14:01.586 "is_configured": true, 00:14:01.586 "data_offset": 2048, 00:14:01.586 "data_size": 63488 00:14:01.586 } 00:14:01.586 ] 00:14:01.586 }' 00:14:01.586 14:15:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:01.586 14:15:53 -- common/autotest_common.sh@10 -- # set +x 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.537 14:15:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:02.796 [2024-11-18 14:15:54.689975] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.796 [2024-11-18 14:15:54.690217] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.796 [2024-11-18 14:15:54.690526] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.796 [2024-11-18 14:15:54.700798] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.796 [2024-11-18 14:15:54.701065] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:14:02.796 14:15:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:02.796 14:15:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:02.796 14:15:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:02.796 14:15:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:03.054 14:15:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:03.054 14:15:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:03.054 14:15:54 -- bdev/bdev_raid.sh@287 -- # killprocess 124485 00:14:03.054 14:15:54 -- common/autotest_common.sh@936 -- # '[' -z 124485 ']' 00:14:03.054 14:15:54 -- common/autotest_common.sh@940 -- # kill -0 124485 00:14:03.054 14:15:54 -- common/autotest_common.sh@941 -- # uname 00:14:03.054 14:15:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.054 14:15:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124485 00:14:03.054 14:15:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:03.054 14:15:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:03.054 14:15:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124485' 00:14:03.054 killing process with pid 124485 00:14:03.054 14:15:54 -- common/autotest_common.sh@955 -- # kill 124485 00:14:03.054 14:15:54 -- common/autotest_common.sh@960 -- # wait 124485 00:14:03.054 [2024-11-18 14:15:54.996326] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.054 [2024-11-18 14:15:54.996578] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:03.314 ************************************ 00:14:03.314 END TEST raid_state_function_test_sb 00:14:03.314 ************************************ 00:14:03.314 00:14:03.314 real 0m10.076s 00:14:03.314 user 0m18.285s 00:14:03.314 sys 0m1.333s 00:14:03.314 14:15:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:03.314 14:15:55 -- common/autotest_common.sh@10 -- # set +x 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:14:03.314 14:15:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:03.314 14:15:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.314 14:15:55 -- common/autotest_common.sh@10 -- # set +x 00:14:03.314 ************************************ 00:14:03.314 START TEST raid_superblock_test 00:14:03.314 ************************************ 00:14:03.314 14:15:55 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@357 -- # raid_pid=124809 00:14:03.314 14:15:55 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 124809 /var/tmp/spdk-raid.sock 00:14:03.314 14:15:55 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:03.314 14:15:55 -- common/autotest_common.sh@829 -- # '[' -z 124809 ']' 00:14:03.314 14:15:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:03.314 14:15:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.314 14:15:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:03.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:03.314 14:15:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.314 14:15:55 -- common/autotest_common.sh@10 -- # set +x 00:14:03.314 [2024-11-18 14:15:55.330472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:03.314 [2024-11-18 14:15:55.330802] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124809 ] 00:14:03.573 [2024-11-18 14:15:55.472062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.573 [2024-11-18 14:15:55.567721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.832 [2024-11-18 14:15:55.649167] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.399 14:15:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:04.399 14:15:56 -- common/autotest_common.sh@862 -- # return 0 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.399 14:15:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:04.658 malloc1 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.658 [2024-11-18 14:15:56.675058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.658 [2024-11-18 14:15:56.675418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.658 [2024-11-18 14:15:56.675615] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:14:04.658 [2024-11-18 14:15:56.675798] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.658 [2024-11-18 14:15:56.678354] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.658 [2024-11-18 14:15:56.678536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.658 pt1 00:14:04.658 
14:15:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.658 14:15:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:04.917 malloc2 00:14:04.917 14:15:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:05.176 [2024-11-18 14:15:57.148654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:05.176 [2024-11-18 14:15:57.148935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.176 [2024-11-18 14:15:57.149091] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:05.176 [2024-11-18 14:15:57.149251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.176 [2024-11-18 14:15:57.151539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.176 [2024-11-18 14:15:57.151717] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:05.176 pt2 00:14:05.176 14:15:57 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:05.176 14:15:57 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:05.176 14:15:57 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:05.435 [2024-11-18 14:15:57.348740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:05.435 [2024-11-18 14:15:57.350862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:05.435 [2024-11-18 14:15:57.351200] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:14:05.435 [2024-11-18 14:15:57.351321] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.435 [2024-11-18 14:15:57.351507] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:05.435 [2024-11-18 14:15:57.352078] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:14:05.435 [2024-11-18 14:15:57.352228] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:14:05.435 [2024-11-18 14:15:57.352486] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.435 14:15:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.694 14:15:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.694 "name": "raid_bdev1", 00:14:05.694 "uuid": "8f3a675d-c99c-4b1b-975c-ab5ff60e1b31", 00:14:05.694 "strip_size_kb": 0, 00:14:05.694 "state": "online", 00:14:05.694 "raid_level": "raid1", 00:14:05.694 "superblock": true, 00:14:05.694 "num_base_bdevs": 2, 00:14:05.694 "num_base_bdevs_discovered": 2, 00:14:05.694 "num_base_bdevs_operational": 2, 00:14:05.694 "base_bdevs_list": [ 00:14:05.694 { 00:14:05.694 "name": "pt1", 00:14:05.694 "uuid": "ba6e14bd-8be5-51c4-83fc-f0077bcc3df3", 00:14:05.694 "is_configured": true, 00:14:05.694 "data_offset": 2048, 00:14:05.694 "data_size": 63488 00:14:05.694 }, 00:14:05.694 { 00:14:05.694 "name": "pt2", 00:14:05.694 "uuid": "531827f5-b706-5bb2-a0d2-9e720f5559a7", 00:14:05.694 "is_configured": true, 00:14:05.694 "data_offset": 2048, 00:14:05.694 "data_size": 63488 00:14:05.694 } 00:14:05.694 ] 00:14:05.694 }' 00:14:05.694 14:15:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.694 14:15:57 -- common/autotest_common.sh@10 -- # set +x 00:14:06.262 14:15:58 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:06.262 14:15:58 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:06.521 [2024-11-18 14:15:58.357068] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.521 14:15:58 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8f3a675d-c99c-4b1b-975c-ab5ff60e1b31 00:14:06.521 14:15:58 -- bdev/bdev_raid.sh@380 -- # '[' -z 8f3a675d-c99c-4b1b-975c-ab5ff60e1b31 ']' 00:14:06.521 14:15:58 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:06.779 [2024-11-18 14:15:58.616909] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.779 [2024-11-18 14:15:58.617050] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.779 [2024-11-18 14:15:58.617248] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.779 [2024-11-18 14:15:58.617446] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.779 [2024-11-18 14:15:58.617559] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:14:06.779 14:15:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.779 14:15:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:07.038 14:15:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:07.038 14:15:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:07.038 14:15:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.038 14:15:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:07.301 14:15:59 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.301 14:15:59 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:07.301 14:15:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:07.301 14:15:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:07.561 14:15:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:07.561 14:15:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:07.561 14:15:59 -- common/autotest_common.sh@650 -- # local es=0 00:14:07.561 14:15:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:07.561 14:15:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.561 14:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.561 14:15:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.561 14:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.561 14:15:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.561 14:15:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.561 14:15:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:07.561 14:15:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:07.561 14:15:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:14:07.820 [2024-11-18 14:15:59.661061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:07.820 [2024-11-18 14:15:59.663214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:07.820 [2024-11-18 14:15:59.663389] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:07.820 [2024-11-18 14:15:59.663566] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:07.820 [2024-11-18 14:15:59.663641] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.820 [2024-11-18 14:15:59.663730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:14:07.820 request: 00:14:07.820 { 00:14:07.820 "name": "raid_bdev1", 00:14:07.820 "raid_level": "raid1", 00:14:07.820 "base_bdevs": [ 00:14:07.820 "malloc1", 00:14:07.820 "malloc2" 00:14:07.820 ], 00:14:07.820 "superblock": false, 00:14:07.820 "method": "bdev_raid_create", 00:14:07.821 "req_id": 1 00:14:07.821 } 00:14:07.821 Got JSON-RPC error response 00:14:07.821 response: 00:14:07.821 { 00:14:07.821 "code": -17, 00:14:07.821 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:07.821 } 00:14:07.821 14:15:59 -- common/autotest_common.sh@653 -- # es=1 00:14:07.821 14:15:59 -- common/autotest_common.sh@661 -- # 
(( es > 128 )) 00:14:07.821 14:15:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.821 14:15:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.821 14:15:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.821 14:15:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:08.079 14:15:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:08.079 14:15:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:08.079 14:15:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:08.079 [2024-11-18 14:16:00.085078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:08.079 [2024-11-18 14:16:00.085293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.079 [2024-11-18 14:16:00.085361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:08.079 [2024-11-18 14:16:00.085689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.079 [2024-11-18 14:16:00.088019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.079 [2024-11-18 14:16:00.088188] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:08.079 [2024-11-18 14:16:00.088368] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:08.079 [2024-11-18 14:16:00.088527] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:08.079 pt1 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.079 14:16:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.338 14:16:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.338 "name": "raid_bdev1", 00:14:08.338 "uuid": "8f3a675d-c99c-4b1b-975c-ab5ff60e1b31", 00:14:08.338 "strip_size_kb": 0, 00:14:08.338 "state": "configuring", 00:14:08.338 "raid_level": "raid1", 00:14:08.338 "superblock": true, 00:14:08.338 "num_base_bdevs": 2, 00:14:08.338 "num_base_bdevs_discovered": 1, 00:14:08.338 "num_base_bdevs_operational": 2, 00:14:08.338 "base_bdevs_list": [ 00:14:08.338 { 00:14:08.338 "name": "pt1", 00:14:08.338 "uuid": "ba6e14bd-8be5-51c4-83fc-f0077bcc3df3", 00:14:08.338 "is_configured": true, 00:14:08.338 "data_offset": 2048, 00:14:08.338 "data_size": 63488 00:14:08.338 }, 00:14:08.338 { 00:14:08.338 "name": null, 00:14:08.338 "uuid": "531827f5-b706-5bb2-a0d2-9e720f5559a7", 00:14:08.338 
"is_configured": false, 00:14:08.338 "data_offset": 2048, 00:14:08.338 "data_size": 63488 00:14:08.338 } 00:14:08.338 ] 00:14:08.338 }' 00:14:08.338 14:16:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.338 14:16:00 -- common/autotest_common.sh@10 -- # set +x 00:14:09.274 14:16:00 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:09.274 14:16:00 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:09.274 14:16:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:09.274 14:16:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:09.274 [2024-11-18 14:16:01.167147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:09.274 [2024-11-18 14:16:01.167385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.274 [2024-11-18 14:16:01.167456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:09.274 [2024-11-18 14:16:01.167760] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.274 [2024-11-18 14:16:01.168152] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.274 [2024-11-18 14:16:01.168216] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:09.274 [2024-11-18 14:16:01.168302] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:09.274 [2024-11-18 14:16:01.168357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.274 [2024-11-18 14:16:01.168478] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:09.274 [2024-11-18 14:16:01.168519] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:09.274 [2024-11-18 14:16:01.168608] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:09.274 [2024-11-18 14:16:01.168900] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:09.274 [2024-11-18 14:16:01.168941] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:09.274 [2024-11-18 14:16:01.169049] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.274 pt2 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.274 14:16:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.274 14:16:01 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.532 14:16:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.532 "name": "raid_bdev1", 00:14:09.532 "uuid": "8f3a675d-c99c-4b1b-975c-ab5ff60e1b31", 00:14:09.532 "strip_size_kb": 0, 00:14:09.532 "state": "online", 00:14:09.532 "raid_level": "raid1", 00:14:09.532 "superblock": true, 00:14:09.532 "num_base_bdevs": 2, 00:14:09.532 "num_base_bdevs_discovered": 2, 00:14:09.532 "num_base_bdevs_operational": 2, 00:14:09.532 "base_bdevs_list": [ 00:14:09.532 { 00:14:09.532 "name": "pt1", 00:14:09.532 "uuid": "ba6e14bd-8be5-51c4-83fc-f0077bcc3df3", 00:14:09.532 "is_configured": true, 00:14:09.532 "data_offset": 2048, 00:14:09.532 "data_size": 63488 00:14:09.532 }, 00:14:09.532 { 00:14:09.532 "name": "pt2", 00:14:09.532 "uuid": "531827f5-b706-5bb2-a0d2-9e720f5559a7", 00:14:09.532 "is_configured": true, 00:14:09.532 "data_offset": 2048, 00:14:09.532 "data_size": 63488 00:14:09.532 } 00:14:09.532 ] 00:14:09.532 }' 00:14:09.532 14:16:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.532 14:16:01 -- common/autotest_common.sh@10 -- # set +x 00:14:10.098 14:16:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:10.098 14:16:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:10.357 [2024-11-18 14:16:02.303503] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.357 14:16:02 -- bdev/bdev_raid.sh@430 -- # '[' 8f3a675d-c99c-4b1b-975c-ab5ff60e1b31 '!=' 8f3a675d-c99c-4b1b-975c-ab5ff60e1b31 ']' 00:14:10.357 14:16:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:10.357 14:16:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:10.357 14:16:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:10.357 14:16:02 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:10.615 [2024-11-18 14:16:02.491409] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.615 14:16:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.616 14:16:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.616 14:16:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.875 14:16:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.875 "name": "raid_bdev1", 00:14:10.875 "uuid": "8f3a675d-c99c-4b1b-975c-ab5ff60e1b31", 00:14:10.875 "strip_size_kb": 0, 00:14:10.875 "state": "online", 00:14:10.875 "raid_level": "raid1", 00:14:10.875 "superblock": true, 00:14:10.875 "num_base_bdevs": 2, 00:14:10.875 "num_base_bdevs_discovered": 1, 00:14:10.875 "num_base_bdevs_operational": 1, 00:14:10.875 
"base_bdevs_list": [ 00:14:10.875 { 00:14:10.875 "name": null, 00:14:10.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.875 "is_configured": false, 00:14:10.875 "data_offset": 2048, 00:14:10.875 "data_size": 63488 00:14:10.875 }, 00:14:10.875 { 00:14:10.875 "name": "pt2", 00:14:10.875 "uuid": "531827f5-b706-5bb2-a0d2-9e720f5559a7", 00:14:10.875 "is_configured": true, 00:14:10.875 "data_offset": 2048, 00:14:10.875 "data_size": 63488 00:14:10.875 } 00:14:10.875 ] 00:14:10.875 }' 00:14:10.875 14:16:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.875 14:16:02 -- common/autotest_common.sh@10 -- # set +x 00:14:11.442 14:16:03 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:11.442 [2024-11-18 14:16:03.487562] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.442 [2024-11-18 14:16:03.487705] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.442 [2024-11-18 14:16:03.487854] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.442 [2024-11-18 14:16:03.487987] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.442 [2024-11-18 14:16:03.488095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:11.442 14:16:03 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.442 14:16:03 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:14:11.701 14:16:03 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:14:11.701 14:16:03 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:14:11.701 14:16:03 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:14:11.701 14:16:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:11.701 14:16:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:11.959 14:16:03 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:11.959 14:16:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:11.959 14:16:03 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:14:11.959 14:16:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:11.959 14:16:03 -- bdev/bdev_raid.sh@462 -- # i=1 00:14:11.959 14:16:03 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:12.218 [2024-11-18 14:16:04.111640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:12.218 [2024-11-18 14:16:04.111836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.218 [2024-11-18 14:16:04.111901] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:12.218 [2024-11-18 14:16:04.112212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.218 [2024-11-18 14:16:04.114148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.218 [2024-11-18 14:16:04.114315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:12.218 [2024-11-18 14:16:04.114470] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:12.218 [2024-11-18 14:16:04.114581] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:12.218 [2024-11-18 14:16:04.114697] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:14:12.218 [2024-11-18 14:16:04.114834] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.218 [2024-11-18 14:16:04.114938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:12.218 [2024-11-18 14:16:04.115422] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:14:12.218 [2024-11-18 14:16:04.115577] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:14:12.218 [2024-11-18 14:16:04.115799] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.218 pt2 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.218 14:16:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.477 14:16:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.477 "name": "raid_bdev1", 00:14:12.477 "uuid": "8f3a675d-c99c-4b1b-975c-ab5ff60e1b31", 00:14:12.477 "strip_size_kb": 0, 00:14:12.477 "state": "online", 00:14:12.477 "raid_level": "raid1", 00:14:12.477 "superblock": true, 00:14:12.477 "num_base_bdevs": 2, 00:14:12.477 "num_base_bdevs_discovered": 1, 00:14:12.477 "num_base_bdevs_operational": 1, 00:14:12.477 "base_bdevs_list": [ 00:14:12.477 { 00:14:12.477 "name": null, 00:14:12.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.477 "is_configured": false, 00:14:12.477 "data_offset": 2048, 00:14:12.477 "data_size": 63488 00:14:12.477 }, 00:14:12.477 { 00:14:12.477 "name": "pt2", 00:14:12.477 "uuid": "531827f5-b706-5bb2-a0d2-9e720f5559a7", 00:14:12.477 "is_configured": true, 00:14:12.477 "data_offset": 2048, 00:14:12.477 "data_size": 63488 00:14:12.477 } 00:14:12.477 ] 00:14:12.477 }' 00:14:12.477 14:16:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.477 14:16:04 -- common/autotest_common.sh@10 -- # set +x 00:14:13.045 14:16:05 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:14:13.045 14:16:05 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:13.045 14:16:05 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:14:13.304 [2024-11-18 14:16:05.188086] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.304 14:16:05 -- bdev/bdev_raid.sh@506 -- # '[' 8f3a675d-c99c-4b1b-975c-ab5ff60e1b31 '!=' 8f3a675d-c99c-4b1b-975c-ab5ff60e1b31 ']' 00:14:13.304 14:16:05 -- 
bdev/bdev_raid.sh@511 -- # killprocess 124809 00:14:13.304 14:16:05 -- common/autotest_common.sh@936 -- # '[' -z 124809 ']' 00:14:13.304 14:16:05 -- common/autotest_common.sh@940 -- # kill -0 124809 00:14:13.304 14:16:05 -- common/autotest_common.sh@941 -- # uname 00:14:13.304 14:16:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:13.304 14:16:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124809 00:14:13.304 14:16:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:13.304 14:16:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:13.304 14:16:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124809' 00:14:13.304 killing process with pid 124809 00:14:13.304 14:16:05 -- common/autotest_common.sh@955 -- # kill 124809 00:14:13.304 [2024-11-18 14:16:05.225283] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.304 14:16:05 -- common/autotest_common.sh@960 -- # wait 124809 00:14:13.304 [2024-11-18 14:16:05.225480] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.304 [2024-11-18 14:16:05.225645] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.304 [2024-11-18 14:16:05.225767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:14:13.304 [2024-11-18 14:16:05.250709] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.563 14:16:05 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:13.563 00:14:13.563 real 0m10.261s 00:14:13.563 user 0m18.857s 00:14:13.563 sys 0m1.330s 00:14:13.563 14:16:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:13.563 14:16:05 -- common/autotest_common.sh@10 -- # set +x 00:14:13.563 ************************************ 00:14:13.563 END TEST raid_superblock_test 00:14:13.563 ************************************ 00:14:13.563 14:16:05 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:13.563 14:16:05 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:13.563 14:16:05 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:14:13.563 14:16:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:13.563 14:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:13.563 14:16:05 -- common/autotest_common.sh@10 -- # set +x 00:14:13.563 ************************************ 00:14:13.563 START TEST raid_state_function_test 00:14:13.563 ************************************ 00:14:13.563 14:16:05 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false 00:14:13.563 14:16:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=125148 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125148' 00:14:13.564 Process raid pid: 125148 00:14:13.564 14:16:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125148 /var/tmp/spdk-raid.sock 00:14:13.564 14:16:05 -- common/autotest_common.sh@829 -- # '[' -z 125148 ']' 00:14:13.564 14:16:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:13.564 14:16:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:13.564 14:16:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:13.564 14:16:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.564 14:16:05 -- common/autotest_common.sh@10 -- # set +x 00:14:13.823 [2024-11-18 14:16:05.660469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:13.823 [2024-11-18 14:16:05.660644] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.823 [2024-11-18 14:16:05.794430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.823 [2024-11-18 14:16:05.862510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.082 [2024-11-18 14:16:05.932009] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.647 14:16:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.647 14:16:06 -- common/autotest_common.sh@862 -- # return 0 00:14:14.647 14:16:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:14.905 [2024-11-18 14:16:06.845953] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.905 [2024-11-18 14:16:06.846027] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.905 [2024-11-18 14:16:06.846040] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:14.905 [2024-11-18 14:16:06.846058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:14.905 [2024-11-18 14:16:06.846065] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:14.905 [2024-11-18 14:16:06.846105] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.905 14:16:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.164 14:16:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.164 "name": "Existed_Raid", 00:14:15.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.165 "strip_size_kb": 64, 00:14:15.165 "state": "configuring", 00:14:15.165 "raid_level": "raid0", 00:14:15.165 "superblock": false, 00:14:15.165 "num_base_bdevs": 3, 00:14:15.165 "num_base_bdevs_discovered": 0, 00:14:15.165 "num_base_bdevs_operational": 3, 00:14:15.165 "base_bdevs_list": [ 00:14:15.165 { 00:14:15.165 "name": "BaseBdev1", 00:14:15.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.165 "is_configured": false, 00:14:15.165 "data_offset": 0, 00:14:15.165 "data_size": 0 00:14:15.165 }, 00:14:15.165 { 00:14:15.165 "name": "BaseBdev2", 00:14:15.165 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:15.165 "is_configured": false, 00:14:15.165 "data_offset": 0, 00:14:15.165 "data_size": 0 00:14:15.165 }, 00:14:15.165 { 00:14:15.165 "name": "BaseBdev3", 00:14:15.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.165 "is_configured": false, 00:14:15.165 "data_offset": 0, 00:14:15.165 "data_size": 0 00:14:15.165 } 00:14:15.165 ] 00:14:15.165 }' 00:14:15.165 14:16:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.165 14:16:07 -- common/autotest_common.sh@10 -- # set +x 00:14:15.732 14:16:07 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:15.991 [2024-11-18 14:16:07.949982] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.991 [2024-11-18 14:16:07.950013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:15.991 14:16:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:16.250 [2024-11-18 14:16:08.146012] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.250 [2024-11-18 14:16:08.146059] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.250 [2024-11-18 14:16:08.146070] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.250 [2024-11-18 14:16:08.146091] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.250 [2024-11-18 14:16:08.146098] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:16.250 [2024-11-18 14:16:08.146122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.250 14:16:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.509 [2024-11-18 14:16:08.351701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.509 BaseBdev1 00:14:16.509 14:16:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:16.509 14:16:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:16.509 14:16:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.509 14:16:08 -- common/autotest_common.sh@899 -- # local i 00:14:16.509 14:16:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.509 14:16:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.509 14:16:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.509 14:16:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:16.768 [ 00:14:16.768 { 00:14:16.768 "name": "BaseBdev1", 00:14:16.768 "aliases": [ 00:14:16.768 "59b554a6-7f1c-4ea6-a4a3-55b01614ab4c" 00:14:16.768 ], 00:14:16.768 "product_name": "Malloc disk", 00:14:16.768 "block_size": 512, 00:14:16.768 "num_blocks": 65536, 00:14:16.768 "uuid": "59b554a6-7f1c-4ea6-a4a3-55b01614ab4c", 00:14:16.768 "assigned_rate_limits": { 00:14:16.768 "rw_ios_per_sec": 0, 00:14:16.768 "rw_mbytes_per_sec": 0, 00:14:16.768 "r_mbytes_per_sec": 0, 00:14:16.769 "w_mbytes_per_sec": 0 
00:14:16.769 }, 00:14:16.769 "claimed": true, 00:14:16.769 "claim_type": "exclusive_write", 00:14:16.769 "zoned": false, 00:14:16.769 "supported_io_types": { 00:14:16.769 "read": true, 00:14:16.769 "write": true, 00:14:16.769 "unmap": true, 00:14:16.769 "write_zeroes": true, 00:14:16.769 "flush": true, 00:14:16.769 "reset": true, 00:14:16.769 "compare": false, 00:14:16.769 "compare_and_write": false, 00:14:16.769 "abort": true, 00:14:16.769 "nvme_admin": false, 00:14:16.769 "nvme_io": false 00:14:16.769 }, 00:14:16.769 "memory_domains": [ 00:14:16.769 { 00:14:16.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.769 "dma_device_type": 2 00:14:16.769 } 00:14:16.769 ], 00:14:16.769 "driver_specific": {} 00:14:16.769 } 00:14:16.769 ] 00:14:16.769 14:16:08 -- common/autotest_common.sh@905 -- # return 0 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.769 14:16:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.027 14:16:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.027 "name": "Existed_Raid", 00:14:17.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.027 "strip_size_kb": 64, 00:14:17.027 "state": "configuring", 00:14:17.027 "raid_level": "raid0", 00:14:17.027 "superblock": false, 00:14:17.027 "num_base_bdevs": 3, 00:14:17.027 "num_base_bdevs_discovered": 1, 00:14:17.027 "num_base_bdevs_operational": 3, 00:14:17.027 "base_bdevs_list": [ 00:14:17.027 { 00:14:17.027 "name": "BaseBdev1", 00:14:17.027 "uuid": "59b554a6-7f1c-4ea6-a4a3-55b01614ab4c", 00:14:17.027 "is_configured": true, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 65536 00:14:17.027 }, 00:14:17.027 { 00:14:17.027 "name": "BaseBdev2", 00:14:17.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.027 "is_configured": false, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 0 00:14:17.027 }, 00:14:17.027 { 00:14:17.027 "name": "BaseBdev3", 00:14:17.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.027 "is_configured": false, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 0 00:14:17.027 } 00:14:17.027 ] 00:14:17.027 }' 00:14:17.027 14:16:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.027 14:16:08 -- common/autotest_common.sh@10 -- # set +x 00:14:17.594 14:16:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.594 [2024-11-18 14:16:09.659920] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.594 [2024-11-18 14:16:09.659964] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:14:17.851 14:16:09 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:17.851 14:16:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:17.851 [2024-11-18 14:16:09.844001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.852 [2024-11-18 14:16:09.845822] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:17.852 [2024-11-18 14:16:09.845872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:17.852 [2024-11-18 14:16:09.845883] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:17.852 [2024-11-18 14:16:09.845908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.852 14:16:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.110 14:16:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.110 "name": "Existed_Raid", 00:14:18.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.110 "strip_size_kb": 64, 00:14:18.110 "state": "configuring", 00:14:18.110 "raid_level": "raid0", 00:14:18.110 "superblock": false, 00:14:18.110 "num_base_bdevs": 3, 00:14:18.110 "num_base_bdevs_discovered": 1, 00:14:18.110 "num_base_bdevs_operational": 3, 00:14:18.110 "base_bdevs_list": [ 00:14:18.110 { 00:14:18.110 "name": "BaseBdev1", 00:14:18.110 "uuid": "59b554a6-7f1c-4ea6-a4a3-55b01614ab4c", 00:14:18.110 "is_configured": true, 00:14:18.110 "data_offset": 0, 00:14:18.110 "data_size": 65536 00:14:18.110 }, 00:14:18.110 { 00:14:18.110 "name": "BaseBdev2", 00:14:18.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.110 "is_configured": false, 00:14:18.110 "data_offset": 0, 00:14:18.110 "data_size": 0 00:14:18.110 }, 00:14:18.110 { 00:14:18.110 "name": "BaseBdev3", 00:14:18.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.110 "is_configured": false, 00:14:18.110 "data_offset": 0, 00:14:18.110 "data_size": 0 00:14:18.110 } 00:14:18.110 ] 00:14:18.110 }' 00:14:18.110 14:16:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.110 14:16:10 -- common/autotest_common.sh@10 -- # set +x 00:14:18.676 14:16:10 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:18.935 [2024-11-18 14:16:10.788594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.935 BaseBdev2 00:14:18.935 14:16:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:18.935 14:16:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:18.935 14:16:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:18.935 14:16:10 -- common/autotest_common.sh@899 -- # local i 00:14:18.935 14:16:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:18.935 14:16:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:18.935 14:16:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.935 14:16:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.194 [ 00:14:19.194 { 00:14:19.194 "name": "BaseBdev2", 00:14:19.194 "aliases": [ 00:14:19.194 "5c2bd8d2-9fd4-4b7d-96f2-0937d111654f" 00:14:19.194 ], 00:14:19.194 "product_name": "Malloc disk", 00:14:19.194 "block_size": 512, 00:14:19.194 "num_blocks": 65536, 00:14:19.194 "uuid": "5c2bd8d2-9fd4-4b7d-96f2-0937d111654f", 00:14:19.194 "assigned_rate_limits": { 00:14:19.194 "rw_ios_per_sec": 0, 00:14:19.194 "rw_mbytes_per_sec": 0, 00:14:19.194 "r_mbytes_per_sec": 0, 00:14:19.194 "w_mbytes_per_sec": 0 00:14:19.194 }, 00:14:19.194 "claimed": true, 00:14:19.194 "claim_type": "exclusive_write", 00:14:19.194 "zoned": false, 00:14:19.194 "supported_io_types": { 00:14:19.194 "read": true, 00:14:19.194 "write": true, 00:14:19.194 "unmap": true, 00:14:19.194 "write_zeroes": true, 00:14:19.194 "flush": true, 00:14:19.194 "reset": true, 00:14:19.194 "compare": false, 00:14:19.194 "compare_and_write": false, 00:14:19.194 "abort": true, 00:14:19.194 "nvme_admin": false, 00:14:19.194 "nvme_io": false 00:14:19.194 }, 00:14:19.194 "memory_domains": [ 00:14:19.194 { 00:14:19.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.194 "dma_device_type": 2 00:14:19.194 } 00:14:19.194 ], 00:14:19.194 "driver_specific": {} 00:14:19.194 } 00:14:19.194 ] 00:14:19.194 14:16:11 -- common/autotest_common.sh@905 -- # return 0 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.194 14:16:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:19.454 14:16:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.454 "name": "Existed_Raid", 00:14:19.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.454 "strip_size_kb": 64, 00:14:19.454 "state": "configuring", 00:14:19.454 "raid_level": "raid0", 00:14:19.454 "superblock": false, 00:14:19.454 "num_base_bdevs": 3, 00:14:19.454 "num_base_bdevs_discovered": 2, 00:14:19.454 "num_base_bdevs_operational": 3, 00:14:19.454 "base_bdevs_list": [ 00:14:19.454 { 00:14:19.454 "name": "BaseBdev1", 00:14:19.454 "uuid": "59b554a6-7f1c-4ea6-a4a3-55b01614ab4c", 00:14:19.454 "is_configured": true, 00:14:19.454 "data_offset": 0, 00:14:19.454 "data_size": 65536 00:14:19.454 }, 00:14:19.454 { 00:14:19.454 "name": "BaseBdev2", 00:14:19.454 "uuid": "5c2bd8d2-9fd4-4b7d-96f2-0937d111654f", 00:14:19.454 "is_configured": true, 00:14:19.454 "data_offset": 0, 00:14:19.454 "data_size": 65536 00:14:19.454 }, 00:14:19.454 { 00:14:19.454 "name": "BaseBdev3", 00:14:19.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.454 "is_configured": false, 00:14:19.454 "data_offset": 0, 00:14:19.454 "data_size": 0 00:14:19.454 } 00:14:19.454 ] 00:14:19.454 }' 00:14:19.454 14:16:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.454 14:16:11 -- common/autotest_common.sh@10 -- # set +x 00:14:20.021 14:16:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:20.279 [2024-11-18 14:16:12.236257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.279 [2024-11-18 14:16:12.236298] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:20.279 [2024-11-18 14:16:12.236308] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:20.279 [2024-11-18 14:16:12.236454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:20.279 [2024-11-18 14:16:12.236930] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:20.279 [2024-11-18 14:16:12.236952] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:20.279 [2024-11-18 14:16:12.237216] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.279 BaseBdev3 00:14:20.279 14:16:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:20.279 14:16:12 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:20.279 14:16:12 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.279 14:16:12 -- common/autotest_common.sh@899 -- # local i 00:14:20.279 14:16:12 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.279 14:16:12 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.279 14:16:12 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.538 14:16:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:20.797 [ 00:14:20.797 { 00:14:20.797 "name": "BaseBdev3", 00:14:20.797 "aliases": [ 00:14:20.797 "79c55f25-b9d9-4229-abd4-fb3459b1c60d" 00:14:20.797 ], 00:14:20.797 "product_name": "Malloc disk", 00:14:20.797 "block_size": 512, 00:14:20.797 "num_blocks": 65536, 00:14:20.797 "uuid": "79c55f25-b9d9-4229-abd4-fb3459b1c60d", 00:14:20.797 "assigned_rate_limits": { 00:14:20.797 
"rw_ios_per_sec": 0, 00:14:20.797 "rw_mbytes_per_sec": 0, 00:14:20.797 "r_mbytes_per_sec": 0, 00:14:20.797 "w_mbytes_per_sec": 0 00:14:20.797 }, 00:14:20.797 "claimed": true, 00:14:20.797 "claim_type": "exclusive_write", 00:14:20.797 "zoned": false, 00:14:20.797 "supported_io_types": { 00:14:20.797 "read": true, 00:14:20.797 "write": true, 00:14:20.797 "unmap": true, 00:14:20.797 "write_zeroes": true, 00:14:20.797 "flush": true, 00:14:20.797 "reset": true, 00:14:20.797 "compare": false, 00:14:20.797 "compare_and_write": false, 00:14:20.797 "abort": true, 00:14:20.797 "nvme_admin": false, 00:14:20.797 "nvme_io": false 00:14:20.797 }, 00:14:20.797 "memory_domains": [ 00:14:20.797 { 00:14:20.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.797 "dma_device_type": 2 00:14:20.797 } 00:14:20.797 ], 00:14:20.797 "driver_specific": {} 00:14:20.797 } 00:14:20.797 ] 00:14:20.797 14:16:12 -- common/autotest_common.sh@905 -- # return 0 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.797 14:16:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.057 14:16:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.057 "name": "Existed_Raid", 00:14:21.057 "uuid": "12cb16fd-cc93-496a-b73d-d35cefeb9c9d", 00:14:21.057 "strip_size_kb": 64, 00:14:21.057 "state": "online", 00:14:21.057 "raid_level": "raid0", 00:14:21.057 "superblock": false, 00:14:21.057 "num_base_bdevs": 3, 00:14:21.057 "num_base_bdevs_discovered": 3, 00:14:21.057 "num_base_bdevs_operational": 3, 00:14:21.057 "base_bdevs_list": [ 00:14:21.057 { 00:14:21.057 "name": "BaseBdev1", 00:14:21.057 "uuid": "59b554a6-7f1c-4ea6-a4a3-55b01614ab4c", 00:14:21.057 "is_configured": true, 00:14:21.057 "data_offset": 0, 00:14:21.057 "data_size": 65536 00:14:21.057 }, 00:14:21.057 { 00:14:21.057 "name": "BaseBdev2", 00:14:21.057 "uuid": "5c2bd8d2-9fd4-4b7d-96f2-0937d111654f", 00:14:21.057 "is_configured": true, 00:14:21.057 "data_offset": 0, 00:14:21.057 "data_size": 65536 00:14:21.057 }, 00:14:21.057 { 00:14:21.057 "name": "BaseBdev3", 00:14:21.057 "uuid": "79c55f25-b9d9-4229-abd4-fb3459b1c60d", 00:14:21.057 "is_configured": true, 00:14:21.057 "data_offset": 0, 00:14:21.057 "data_size": 65536 00:14:21.057 } 00:14:21.057 ] 00:14:21.057 }' 00:14:21.057 14:16:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.057 14:16:13 -- common/autotest_common.sh@10 -- # set +x 00:14:21.625 14:16:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:14:21.883 [2024-11-18 14:16:13.851165] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.883 [2024-11-18 14:16:13.851190] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.883 [2024-11-18 14:16:13.851269] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.883 14:16:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.142 14:16:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.142 "name": "Existed_Raid", 00:14:22.142 "uuid": "12cb16fd-cc93-496a-b73d-d35cefeb9c9d", 00:14:22.142 "strip_size_kb": 64, 00:14:22.142 "state": "offline", 00:14:22.142 "raid_level": "raid0", 00:14:22.142 "superblock": false, 00:14:22.142 "num_base_bdevs": 3, 00:14:22.142 "num_base_bdevs_discovered": 2, 00:14:22.142 "num_base_bdevs_operational": 2, 00:14:22.142 "base_bdevs_list": [ 00:14:22.142 { 00:14:22.142 "name": null, 00:14:22.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.142 "is_configured": false, 00:14:22.142 "data_offset": 0, 00:14:22.142 "data_size": 65536 00:14:22.142 }, 00:14:22.142 { 00:14:22.142 "name": "BaseBdev2", 00:14:22.142 "uuid": "5c2bd8d2-9fd4-4b7d-96f2-0937d111654f", 00:14:22.142 "is_configured": true, 00:14:22.142 "data_offset": 0, 00:14:22.142 "data_size": 65536 00:14:22.142 }, 00:14:22.142 { 00:14:22.142 "name": "BaseBdev3", 00:14:22.142 "uuid": "79c55f25-b9d9-4229-abd4-fb3459b1c60d", 00:14:22.142 "is_configured": true, 00:14:22.142 "data_offset": 0, 00:14:22.142 "data_size": 65536 00:14:22.142 } 00:14:22.142 ] 00:14:22.142 }' 00:14:22.142 14:16:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.142 14:16:14 -- common/autotest_common.sh@10 -- # set +x 00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:22.969 14:16:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:22.969 14:16:14 -- bdev/bdev_raid.sh@275 -- 
00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:22.710 14:16:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:22.969 14:16:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:22.969 14:16:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:22.969 14:16:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:23.228 [2024-11-18 14:16:15.191858] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:23.228 14:16:15 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:23.228 14:16:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:23.228 14:16:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:23.228 14:16:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:23.486 14:16:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:23.486 14:16:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:23.486 14:16:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:14:23.745 [2024-11-18 14:16:15.636583] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:23.745 [2024-11-18 14:16:15.636632] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline
00:14:23.745 14:16:15 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:23.745 14:16:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:23.745 14:16:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:23.745 14:16:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:24.004 14:16:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:24.004 14:16:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:24.004 14:16:15 -- bdev/bdev_raid.sh@287 -- # killprocess 125148
00:14:24.004 14:16:15 -- common/autotest_common.sh@936 -- # '[' -z 125148 ']'
00:14:24.004 14:16:15 -- common/autotest_common.sh@940 -- # kill -0 125148
00:14:24.004 14:16:15 -- common/autotest_common.sh@941 -- # uname
00:14:24.004 14:16:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:24.004 14:16:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125148
00:14:24.004 14:16:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:24.004 14:16:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:24.004 14:16:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125148'
00:14:24.004 killing process with pid 125148
00:14:24.004 14:16:15 -- common/autotest_common.sh@955 -- # kill 125148
00:14:24.004 14:16:15 -- common/autotest_common.sh@960 -- # wait 125148
00:14:24.004 [2024-11-18 14:16:15.937087] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:24.004 [2024-11-18 14:16:15.937178] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@289 -- # return 0
00:14:24.263
00:14:24.263 real 0m10.610s
00:14:24.263 user 0m19.388s
00:14:24.263 sys 0m1.318s
00:14:24.263 14:16:16 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:24.263 14:16:16 -- common/autotest_common.sh@10 -- # set +x
00:14:24.263 ************************************
00:14:24.263 END TEST raid_state_function_test
00:14:24.263 ************************************
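Every test in this section follows the lifecycle that just completed: launch a dedicated bdev_svc app on a private RPC socket, drive it with rpc.py, then kill and wait on the pid. A rough sketch of that harness, assembled from the invocations visible in this log (paths as printed here; the naive readiness poll is an assumption standing in for the real waitforlisten helper from autotest_common.sh):

    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the UNIX-domain socket answers RPCs (stand-in for waitforlisten).
    until scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # ...test body: bdev_raid_create / bdev_raid_get_bdevs / bdev_malloc_delete...
    kill "$raid_pid"
    wait "$raid_pid"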
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true
00:14:24.263 14:16:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:14:24.263 14:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:24.263 14:16:16 -- common/autotest_common.sh@10 -- # set +x
00:14:24.263 ************************************
00:14:24.263 START TEST raid_state_function_test_sb
00:14:24.263 ************************************
00:14:24.263 14:16:16 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=125511
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125511'
00:14:24.263 Process raid pid: 125511
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:24.263 14:16:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125511 /var/tmp/spdk-raid.sock
00:14:24.263 14:16:16 -- common/autotest_common.sh@829 -- # '[' -z 125511 ']'
00:14:24.263 14:16:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:24.263 14:16:16 -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:24.263 14:16:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:24.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:24.263 14:16:16 -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:24.263 14:16:16 -- common/autotest_common.sh@10 -- # set +x
00:14:24.522 [2024-11-18 14:16:16.341543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:24.522 [2024-11-18 14:16:16.341830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:24.522 [2024-11-18 14:16:16.488228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:24.522 [2024-11-18 14:16:16.554741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:24.780 [2024-11-18 14:16:16.624256] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:25.397 14:16:17 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:25.397 14:16:17 -- common/autotest_common.sh@862 -- # return 0
00:14:25.397 14:16:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:25.397 [2024-11-18 14:16:17.446259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:25.397 [2024-11-18 14:16:17.446342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:25.397 [2024-11-18 14:16:17.446356] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:25.397 [2024-11-18 14:16:17.446374] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:25.397 [2024-11-18 14:16:17.446381] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:25.397 [2024-11-18 14:16:17.446421] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:25.676 "name": "Existed_Raid",
00:14:25.676 "uuid": "5b2ee32d-a445-4939-b5dd-637322857ffb",
00:14:25.676 "strip_size_kb": 64,
00:14:25.676 "state": "configuring",
00:14:25.676 "raid_level": "raid0",
00:14:25.676 "superblock": true,
00:14:25.676 "num_base_bdevs": 3,
00:14:25.676 "num_base_bdevs_discovered": 0,
00:14:25.676 "num_base_bdevs_operational": 3,
00:14:25.676 "base_bdevs_list": [
00:14:25.676 {
00:14:25.676 "name": "BaseBdev1",
00:14:25.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.676 "is_configured": false,
00:14:25.676 "data_offset": 0,
00:14:25.676 "data_size": 0
00:14:25.676 },
00:14:25.676 {
00:14:25.676 "name": "BaseBdev2",
00:14:25.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.676 "is_configured": false,
00:14:25.676 "data_offset": 0,
00:14:25.676 "data_size": 0
00:14:25.676 },
00:14:25.676 {
00:14:25.676 "name": "BaseBdev3",
00:14:25.676 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:25.676 "is_configured": false,
00:14:25.676 "data_offset": 0,
00:14:25.676 "data_size": 0
00:14:25.676 }
00:14:25.676 ]
00:14:25.676 }'
00:14:25.676 14:16:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:25.676 14:16:17 -- common/autotest_common.sh@10 -- # set +x
00:14:26.625 14:16:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:26.625 [2024-11-18 14:16:18.574255] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:26.625 [2024-11-18 14:16:18.574289] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring
00:14:26.625 14:16:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:26.884 [2024-11-18 14:16:18.762324] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:14:26.884 [2024-11-18 14:16:18.762373] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:14:26.884 [2024-11-18 14:16:18.762384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:26.884 [2024-11-18 14:16:18.762404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:26.884 [2024-11-18 14:16:18.762411] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:26.884 [2024-11-18 14:16:18.762435] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:26.884 14:16:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:27.143 [2024-11-18 14:16:19.024133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:27.143 BaseBdev1
00:14:27.143 14:16:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:14:27.143 14:16:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:27.143 14:16:19 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:27.143 14:16:19 -- common/autotest_common.sh@899 -- # local i
00:14:27.143 14:16:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:27.143 14:16:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:27.143 14:16:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:27.143 14:16:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:27.401 [
00:14:27.401 {
00:14:27.401 "name": "BaseBdev1",
00:14:27.401 "aliases": [
00:14:27.401 "a7e8c3c2-41af-4a06-9e8b-c5541ad25661"
00:14:27.401 ],
00:14:27.401 "product_name": "Malloc disk",
00:14:27.401 "block_size": 512,
00:14:27.401 "num_blocks": 65536,
00:14:27.401 "uuid": "a7e8c3c2-41af-4a06-9e8b-c5541ad25661",
00:14:27.401 "assigned_rate_limits": {
00:14:27.401 "rw_ios_per_sec": 0,
00:14:27.401 "rw_mbytes_per_sec": 0,
00:14:27.401 "r_mbytes_per_sec": 0,
00:14:27.401 "w_mbytes_per_sec": 0
00:14:27.401 },
00:14:27.401 "claimed": true,
00:14:27.401 "claim_type": "exclusive_write",
00:14:27.401 "zoned": false,
00:14:27.401 "supported_io_types": {
00:14:27.401 "read": true,
00:14:27.401 "write": true,
00:14:27.401 "unmap": true,
00:14:27.401 "write_zeroes": true,
00:14:27.401 "flush": true,
00:14:27.401 "reset": true,
00:14:27.401 "compare": false,
00:14:27.401 "compare_and_write": false,
00:14:27.401 "abort": true,
00:14:27.401 "nvme_admin": false,
00:14:27.401 "nvme_io": false
00:14:27.401 },
00:14:27.401 "memory_domains": [
00:14:27.401 {
00:14:27.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:27.401 "dma_device_type": 2
00:14:27.401 }
00:14:27.401 ],
00:14:27.401 "driver_specific": {}
00:14:27.401 }
00:14:27.401 ]
00:14:27.401 14:16:19 -- common/autotest_common.sh@905 -- # return 0
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:27.401 14:16:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:27.660 14:16:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:27.660 "name": "Existed_Raid",
00:14:27.660 "uuid": "024642b7-05a8-4904-bf10-7712bff6f5c8",
00:14:27.660 "strip_size_kb": 64,
00:14:27.660 "state": "configuring",
00:14:27.660 "raid_level": "raid0",
00:14:27.660 "superblock": true,
00:14:27.660 "num_base_bdevs": 3,
00:14:27.660 "num_base_bdevs_discovered": 1,
00:14:27.660 "num_base_bdevs_operational": 3,
00:14:27.660 "base_bdevs_list": [
00:14:27.660 {
00:14:27.660 "name": "BaseBdev1",
00:14:27.660 "uuid": "a7e8c3c2-41af-4a06-9e8b-c5541ad25661",
00:14:27.660 "is_configured": true,
00:14:27.660 "data_offset": 2048,
00:14:27.660 "data_size": 63488
00:14:27.660 },
00:14:27.660 {
00:14:27.660 "name": "BaseBdev2",
00:14:27.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:27.660 "is_configured": false,
00:14:27.660 "data_offset": 0,
00:14:27.660 "data_size": 0
00:14:27.660 },
00:14:27.660 {
00:14:27.660 "name": "BaseBdev3",
00:14:27.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:27.660 "is_configured": false,
00:14:27.660 "data_offset": 0,
00:14:27.660 "data_size": 0
00:14:27.660 }
00:14:27.660 ]
00:14:27.660 }'
00:14:27.660 14:16:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:27.660 14:16:19 -- common/autotest_common.sh@10 -- # set +x
00:14:28.487 14:16:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:14:28.487 [2024-11-18 14:16:20.420372] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:14:28.487 [2024-11-18 14:16:20.420421] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:14:28.487 14:16:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:14:28.487 14:16:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:28.746 14:16:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:14:29.005 BaseBdev1
00:14:29.005 14:16:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:14:29.005 14:16:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1
00:14:29.005 14:16:20 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:29.005 14:16:20 -- common/autotest_common.sh@899 -- # local i
00:14:29.005 14:16:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:29.005 14:16:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:29.005 14:16:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:29.005 14:16:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:14:29.264 [
00:14:29.264 {
00:14:29.264 "name": "BaseBdev1",
00:14:29.264 "aliases": [
00:14:29.264 "1bc17566-2f5c-468a-82c1-87be1ccee4fa"
00:14:29.264 ],
00:14:29.264 "product_name": "Malloc disk",
00:14:29.264 "block_size": 512,
00:14:29.264 "num_blocks": 65536,
00:14:29.264 "uuid": "1bc17566-2f5c-468a-82c1-87be1ccee4fa",
00:14:29.264 "assigned_rate_limits": {
00:14:29.264 "rw_ios_per_sec": 0,
00:14:29.264 "rw_mbytes_per_sec": 0,
00:14:29.264 "r_mbytes_per_sec": 0,
00:14:29.264 "w_mbytes_per_sec": 0
00:14:29.264 },
00:14:29.264 "claimed": false,
00:14:29.264 "zoned": false,
00:14:29.264 "supported_io_types": {
00:14:29.264 "read": true,
00:14:29.264 "write": true,
00:14:29.264 "unmap": true,
00:14:29.264 "write_zeroes": true,
00:14:29.264 "flush": true,
00:14:29.264 "reset": true,
00:14:29.264 "compare": false,
00:14:29.264 "compare_and_write": false,
00:14:29.264 "abort": true,
00:14:29.264 "nvme_admin": false,
00:14:29.264 "nvme_io": false
00:14:29.264 },
00:14:29.264 "memory_domains": [
00:14:29.264 {
00:14:29.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:29.264 "dma_device_type": 2
00:14:29.264 }
00:14:29.264 ],
00:14:29.264 "driver_specific": {}
00:14:29.264 }
00:14:29.264 ]
00:14:29.264 14:16:21 -- common/autotest_common.sh@905 -- # return 0
00:14:29.264 14:16:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:14:29.523 [2024-11-18 14:16:21.469360] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:29.523 [2024-11-18 14:16:21.471357] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:14:29.523 [2024-11-18 14:16:21.471424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:14:29.523 [2024-11-18 14:16:21.471436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:14:29.523 [2024-11-18 14:16:21.471463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:29.523 14:16:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:29.782 14:16:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:29.782 "name": "Existed_Raid",
00:14:29.782 "uuid": "a999f9ef-0a1a-4fbb-b1c4-ac83fe31034c",
00:14:29.782 "strip_size_kb": 64,
00:14:29.782 "state": "configuring",
00:14:29.782 "raid_level": "raid0",
00:14:29.782 "superblock": true,
00:14:29.782 "num_base_bdevs": 3,
00:14:29.782 "num_base_bdevs_discovered": 1,
00:14:29.782 "num_base_bdevs_operational": 3,
00:14:29.782 "base_bdevs_list": [
00:14:29.782 {
00:14:29.782 "name": "BaseBdev1",
00:14:29.782 "uuid": "1bc17566-2f5c-468a-82c1-87be1ccee4fa",
00:14:29.782 "is_configured": true,
00:14:29.782 "data_offset": 2048,
00:14:29.782 "data_size": 63488
00:14:29.782 },
00:14:29.782 {
00:14:29.782 "name": "BaseBdev2",
00:14:29.782 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:29.782 "is_configured": false,
00:14:29.782 "data_offset": 0,
00:14:29.782 "data_size": 0
00:14:29.782 },
00:14:29.782 {
00:14:29.782 "name": "BaseBdev3",
00:14:29.782 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:29.782 "is_configured": false,
00:14:29.782 "data_offset": 0,
00:14:29.782 "data_size": 0
00:14:29.782 }
00:14:29.782 ]
00:14:29.782 }'
00:14:29.782 14:16:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:29.782 14:16:21 -- common/autotest_common.sh@10 -- # set +x
00:14:30.350 14:16:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:14:30.609 [2024-11-18 14:16:22.500883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:30.609 BaseBdev2
00:14:30.609 14:16:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:14:30.609 14:16:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2
00:14:30.609 14:16:22 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:30.609 14:16:22 -- common/autotest_common.sh@899 -- # local i
00:14:30.609 14:16:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:30.609 14:16:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:30.609 14:16:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:30.868 14:16:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:30.868 [
00:14:30.868 {
00:14:30.868 "name": "BaseBdev2",
00:14:30.868 "aliases": [
00:14:30.868 "0c83d33d-fa96-4c46-b2e8-32012a8bd6d3"
00:14:30.868 ],
00:14:30.868 "product_name": "Malloc disk",
00:14:30.868 "block_size": 512,
00:14:30.868 "num_blocks": 65536,
00:14:30.868 "uuid": "0c83d33d-fa96-4c46-b2e8-32012a8bd6d3",
00:14:30.868 "assigned_rate_limits": {
00:14:30.868 "rw_ios_per_sec": 0,
00:14:30.868 "rw_mbytes_per_sec": 0,
00:14:30.868 "r_mbytes_per_sec": 0,
00:14:30.868 "w_mbytes_per_sec": 0
00:14:30.868 },
00:14:30.868 "claimed": true,
00:14:30.868 "claim_type": "exclusive_write",
00:14:30.868 "zoned": false,
00:14:30.868 "supported_io_types": {
00:14:30.868 "read": true,
00:14:30.868 "write": true,
00:14:30.868 "unmap": true,
00:14:30.868 "write_zeroes": true,
00:14:30.868 "flush": true,
00:14:30.868 "reset": true,
00:14:30.868 "compare": false,
00:14:30.868 "compare_and_write": false,
00:14:30.868 "abort": true,
00:14:30.868 "nvme_admin": false,
00:14:30.868 "nvme_io": false
00:14:30.868 },
00:14:30.868 "memory_domains": [
00:14:30.868 {
00:14:30.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:30.868 "dma_device_type": 2
00:14:30.868 }
00:14:30.868 ],
00:14:30.868 "driver_specific": {}
00:14:30.868 }
00:14:30.868 ]
00:14:30.868 14:16:22 -- common/autotest_common.sh@905 -- # return 0
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:30.868 14:16:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:31.128 14:16:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:31.128 "name": "Existed_Raid",
00:14:31.128 "uuid": "a999f9ef-0a1a-4fbb-b1c4-ac83fe31034c",
00:14:31.128 "strip_size_kb": 64,
00:14:31.128 "state": "configuring",
00:14:31.128 "raid_level": "raid0",
00:14:31.128 "superblock": true,
00:14:31.128 "num_base_bdevs": 3,
00:14:31.128 "num_base_bdevs_discovered": 2,
00:14:31.128 "num_base_bdevs_operational": 3,
00:14:31.128 "base_bdevs_list": [
00:14:31.128 {
00:14:31.128 "name": "BaseBdev1",
00:14:31.128 "uuid": "1bc17566-2f5c-468a-82c1-87be1ccee4fa",
00:14:31.128 "is_configured": true,
00:14:31.128 "data_offset": 2048,
00:14:31.128 "data_size": 63488
00:14:31.128 },
00:14:31.128 {
00:14:31.128 "name": "BaseBdev2",
00:14:31.128 "uuid": "0c83d33d-fa96-4c46-b2e8-32012a8bd6d3",
00:14:31.128 "is_configured": true,
00:14:31.128 "data_offset": 2048,
00:14:31.128 "data_size": 63488
00:14:31.128 },
00:14:31.128 {
00:14:31.128 "name": "BaseBdev3",
00:14:31.128 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:31.128 "is_configured": false,
00:14:31.128 "data_offset": 0,
00:14:31.128 "data_size": 0
00:14:31.128 }
00:14:31.128 ]
00:14:31.128 }'
00:14:31.128 14:16:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:31.128 14:16:23 -- common/autotest_common.sh@10 -- # set +x
00:14:31.695 14:16:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:14:31.954 [2024-11-18 14:16:23.932472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:31.954 [2024-11-18 14:16:23.932663] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680
00:14:31.954 [2024-11-18 14:16:23.932677] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:31.954 [2024-11-18 14:16:23.932790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120
00:14:31.954 [2024-11-18 14:16:23.933189] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680
00:14:31.954 [2024-11-18 14:16:23.933211] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680
00:14:31.954 BaseBdev3
00:14:31.954 [2024-11-18 14:16:23.933372] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:31.954 14:16:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:14:31.954 14:16:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:14:31.954 14:16:23 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:14:31.954 14:16:23 -- common/autotest_common.sh@899 -- # local i
00:14:31.954 14:16:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:14:31.954 14:16:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:14:31.954 14:16:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:14:32.213 14:16:24 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:32.471 [
00:14:32.471 {
00:14:32.471 "name": "BaseBdev3",
00:14:32.471 "aliases": [
00:14:32.471 "212937f2-4bfb-484a-b08a-e9e926e55aa3"
00:14:32.471 ],
00:14:32.471 "product_name": "Malloc disk",
00:14:32.471 "block_size": 512,
00:14:32.471 "num_blocks": 65536,
00:14:32.471 "uuid": "212937f2-4bfb-484a-b08a-e9e926e55aa3",
00:14:32.471 "assigned_rate_limits": {
00:14:32.471 "rw_ios_per_sec": 0,
00:14:32.471 "rw_mbytes_per_sec": 0,
00:14:32.471 "r_mbytes_per_sec": 0,
00:14:32.471 "w_mbytes_per_sec": 0
00:14:32.471 },
00:14:32.471 "claimed": true,
00:14:32.471 "claim_type": "exclusive_write",
00:14:32.471 "zoned": false,
00:14:32.471 "supported_io_types": {
00:14:32.471 "read": true,
00:14:32.471 "write": true,
00:14:32.471 "unmap": true,
00:14:32.471 "write_zeroes": true,
00:14:32.471 "flush": true,
00:14:32.471 "reset": true,
00:14:32.471 "compare": false,
00:14:32.471 "compare_and_write": false,
00:14:32.471 "abort": true,
00:14:32.471 "nvme_admin": false,
00:14:32.471 "nvme_io": false
00:14:32.471 },
00:14:32.471 "memory_domains": [
00:14:32.471 {
00:14:32.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:32.471 "dma_device_type": 2
00:14:32.471 }
00:14:32.471 ],
00:14:32.471 "driver_specific": {}
00:14:32.471 }
00:14:32.471 ]
00:14:32.471 14:16:24 -- common/autotest_common.sh@905 -- # return 0
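waitforbdev above is the blocking half of base-bdev creation: bdev_get_bdevs -b <name> -t 2000 tells the target to wait up to bdev_timeout (2000 ms here) for the bdev to appear rather than failing immediately, after bdev_wait_for_examine has flushed any pending examine callbacks. The three RPCs, exactly as they are invoked in this log (the malloc geometry 32 MB of 512-byte blocks matches num_blocks 65536 in the dumps):

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000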
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:32.471 14:16:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:32.731 14:16:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:32.731 "name": "Existed_Raid",
00:14:32.731 "uuid": "a999f9ef-0a1a-4fbb-b1c4-ac83fe31034c",
00:14:32.731 "strip_size_kb": 64,
00:14:32.731 "state": "online",
00:14:32.731 "raid_level": "raid0",
00:14:32.731 "superblock": true,
00:14:32.731 "num_base_bdevs": 3,
00:14:32.731 "num_base_bdevs_discovered": 3,
00:14:32.731 "num_base_bdevs_operational": 3,
00:14:32.731 "base_bdevs_list": [
00:14:32.731 {
00:14:32.731 "name": "BaseBdev1",
00:14:32.731 "uuid": "1bc17566-2f5c-468a-82c1-87be1ccee4fa",
00:14:32.731 "is_configured": true,
00:14:32.731 "data_offset": 2048,
00:14:32.731 "data_size": 63488
00:14:32.731 },
00:14:32.731 {
00:14:32.731 "name": "BaseBdev2",
00:14:32.731 "uuid": "0c83d33d-fa96-4c46-b2e8-32012a8bd6d3",
00:14:32.731 "is_configured": true,
00:14:32.731 "data_offset": 2048,
00:14:32.731 "data_size": 63488
00:14:32.731 },
00:14:32.731 {
00:14:32.731 "name": "BaseBdev3",
00:14:32.731 "uuid": "212937f2-4bfb-484a-b08a-e9e926e55aa3",
00:14:32.731 "is_configured": true,
00:14:32.731 "data_offset": 2048,
00:14:32.731 "data_size": 63488
00:14:32.731 }
00:14:32.731 ]
00:14:32.731 }'
00:14:32.731 14:16:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:32.731 14:16:24 -- common/autotest_common.sh@10 -- # set +x
00:14:33.298 14:16:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:14:33.557 [2024-11-18 14:16:25.488844] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:33.557 [2024-11-18 14:16:25.488870] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:33.557 [2024-11-18 14:16:25.488933] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:33.557 14:16:25 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:14:33.557 14:16:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:14:33.557 14:16:25 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:14:33.557 14:16:25 -- bdev/bdev_raid.sh@197 -- # return 1
00:14:33.557 14:16:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:33.558 14:16:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:33.816 14:16:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:33.816 "name": "Existed_Raid",
00:14:33.816 "uuid": "a999f9ef-0a1a-4fbb-b1c4-ac83fe31034c",
00:14:33.816 "strip_size_kb": 64,
00:14:33.816 "state": "offline",
00:14:33.816 "raid_level": "raid0",
00:14:33.816 "superblock": true,
00:14:33.816 "num_base_bdevs": 3,
00:14:33.816 "num_base_bdevs_discovered": 2,
00:14:33.816 "num_base_bdevs_operational": 2,
00:14:33.816 "base_bdevs_list": [
00:14:33.816 {
00:14:33.816 "name": null,
00:14:33.816 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:33.816 "is_configured": false,
00:14:33.816 "data_offset": 2048,
00:14:33.816 "data_size": 63488
00:14:33.816 },
00:14:33.816 {
00:14:33.816 "name": "BaseBdev2",
00:14:33.816 "uuid": "0c83d33d-fa96-4c46-b2e8-32012a8bd6d3",
00:14:33.816 "is_configured": true,
00:14:33.816 "data_offset": 2048,
00:14:33.816 "data_size": 63488
00:14:33.816 },
00:14:33.816 {
00:14:33.816 "name": "BaseBdev3",
00:14:33.816 "uuid": "212937f2-4bfb-484a-b08a-e9e926e55aa3",
00:14:33.816 "is_configured": true,
00:14:33.816 "data_offset": 2048,
00:14:33.816 "data_size": 63488
00:14:33.816 }
00:14:33.816 ]
00:14:33.816 }'
00:14:33.816 14:16:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:33.816 14:16:25 -- common/autotest_common.sh@10 -- # set +x
00:14:34.384 14:16:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:14:34.384 14:16:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:34.384 14:16:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:34.384 14:16:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:14:34.642 [2024-11-18 14:16:26.656608] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:14:34.642 14:16:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:34.901 14:16:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:14:34.901 14:16:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:34.901 14:16:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:14:35.160 [2024-11-18 14:16:27.158612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:35.160 [2024-11-18 14:16:27.158666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline
00:14:35.160 14:16:27 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:14:35.160 14:16:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:14:35.160 14:16:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:35.160 14:16:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:14:35.419 14:16:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:14:35.419 14:16:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:14:35.419 14:16:27 -- bdev/bdev_raid.sh@287 -- # killprocess 125511
00:14:35.419 14:16:27 -- common/autotest_common.sh@936 -- # '[' -z 125511 ']'
00:14:35.419 14:16:27 -- common/autotest_common.sh@940 -- # kill -0 125511
00:14:35.419 14:16:27 -- common/autotest_common.sh@941 -- # uname
00:14:35.419 14:16:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:35.419 14:16:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125511
00:14:35.419 killing process with pid 125511
00:14:35.419 14:16:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:35.419 14:16:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:35.419 14:16:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125511'
00:14:35.419 14:16:27 -- common/autotest_common.sh@955 -- # kill 125511
00:14:35.419 14:16:27 -- common/autotest_common.sh@960 -- # wait 125511
00:14:35.419 [2024-11-18 14:16:27.407671] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:35.419 [2024-11-18 14:16:27.407750] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@289 -- # return 0
00:14:35.677
00:14:35.677 real 0m11.417s
00:14:35.677 user 0m20.972s
00:14:35.677 sys 0m1.381s
00:14:35.677 14:16:27 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:35.677 14:16:27 -- common/autotest_common.sh@10 -- # set +x
00:14:35.677 ************************************
00:14:35.677 END TEST raid_state_function_test_sb
00:14:35.677 ************************************
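killprocess (common/autotest_common.sh) is what ended both state-function tests above: it checks the pid is still alive with kill -0, looks up the process name as a guard (reactor_0 here, and explicitly never sudo), then kills and waits so the app can run its orderly raid shutdown (the raid_bdev_fini_start / raid_bdev_exit debug lines). A reduced sketch of that teardown, assuming the target was started in the background as shown earlier:

    kill -0 "$raid_pid"                                    # still running?
    [ "$(ps --no-headers -o comm= "$raid_pid")" != sudo ]  # refuse to kill a sudo wrapper
    kill "$raid_pid"
    wait "$raid_pid"   # returns once the shutdown path has been logged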
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:14:35.677 14:16:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:14:35.677 14:16:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:35.677 14:16:27 -- common/autotest_common.sh@10 -- # set +x
00:14:35.677 ************************************
00:14:35.677 START TEST raid_superblock_test
00:14:35.677 ************************************
00:14:35.677 14:16:27 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:14:35.677 14:16:27 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=125891
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125891 /var/tmp/spdk-raid.sock
00:14:35.935 14:16:27 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:35.935 14:16:27 -- common/autotest_common.sh@829 -- # '[' -z 125891 ']'
00:14:35.935 14:16:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:35.935 14:16:27 -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:35.935 14:16:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:35.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:35.935 14:16:27 -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:35.935 14:16:27 -- common/autotest_common.sh@10 -- # set +x
00:14:35.935 [2024-11-18 14:16:27.793536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:14:35.935 [2024-11-18 14:16:27.793725] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125891 ]
00:14:35.935 [2024-11-18 14:16:27.929018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:36.194 [2024-11-18 14:16:28.001417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:36.194 [2024-11-18 14:16:28.070799] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:36.762 14:16:28 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:36.762 14:16:28 -- common/autotest_common.sh@862 -- # return 0
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:36.762 14:16:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:37.020 malloc1
00:14:37.020 14:16:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:37.279 [2024-11-18 14:16:29.246466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:37.279 [2024-11-18 14:16:29.246541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:37.279 [2024-11-18 14:16:29.246617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80
00:14:37.279 [2024-11-18 14:16:29.246662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:37.279 [2024-11-18 14:16:29.249095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:37.279 [2024-11-18 14:16:29.249156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:37.279 pt1
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:37.279 14:16:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:37.538 malloc2
00:14:37.538 14:16:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:37.796 [2024-11-18 14:16:29.744160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:37.796 [2024-11-18 14:16:29.744236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:37.796 [2024-11-18 14:16:29.744273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:14:37.796 [2024-11-18 14:16:29.744313] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:37.796 [2024-11-18 14:16:29.746160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:37.796 [2024-11-18 14:16:29.746207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:37.796 pt2
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:37.797 14:16:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:14:38.054 malloc3
00:14:38.054 14:16:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:38.312 [2024-11-18 14:16:30.225284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:38.312 [2024-11-18 14:16:30.225361] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:38.312 [2024-11-18 14:16:30.225405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:38.312 [2024-11-18 14:16:30.225449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:38.312 [2024-11-18 14:16:30.227665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:38.312 [2024-11-18 14:16:30.227719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:38.312 pt3
00:14:38.312 14:16:30 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:14:38.312 14:16:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:14:38.312 14:16:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:14:38.570 [2024-11-18 14:16:30.413418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:38.570 [2024-11-18 14:16:30.415439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:38.570 [2024-11-18 14:16:30.415507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:38.570 [2024-11-18 14:16:30.415693] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:14:38.570 [2024-11-18 14:16:30.415717] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:38.570 [2024-11-18 14:16:30.415861] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0
00:14:38.570 [2024-11-18 14:16:30.416237] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:14:38.570 [2024-11-18 14:16:30.416258] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880
00:14:38.570 [2024-11-18 14:16:30.416390] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
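raid_superblock_test layers each base device as malloc -> passthru (pt1..pt3) and builds the array on the passthru bdevs with -s. That indirection is the point of the test: the pt bdevs can be deleted later without touching the malloc disks underneath, so the superblock written by this create survives, which the failed re-create further below relies on. The layering step for one leg, as invoked in this log:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
    # ...same for malloc2/pt2 and malloc3/pt3, then:
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'pt1 pt2 pt3' -n raid_bdev1 -s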
"e803c77b-23c9-5126-a3f2-59582b3e92b4", 00:14:38.570 "is_configured": true, 00:14:38.570 "data_offset": 2048, 00:14:38.570 "data_size": 63488 00:14:38.570 }, 00:14:38.570 { 00:14:38.570 "name": "pt2", 00:14:38.570 "uuid": "247dccd4-c1eb-5d2d-90c9-a8ecd0e7abb4", 00:14:38.570 "is_configured": true, 00:14:38.570 "data_offset": 2048, 00:14:38.570 "data_size": 63488 00:14:38.570 }, 00:14:38.570 { 00:14:38.570 "name": "pt3", 00:14:38.570 "uuid": "11ea45f6-b876-538c-803e-084a216d2028", 00:14:38.570 "is_configured": true, 00:14:38.570 "data_offset": 2048, 00:14:38.570 "data_size": 63488 00:14:38.570 } 00:14:38.570 ] 00:14:38.570 }' 00:14:38.570 14:16:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.570 14:16:30 -- common/autotest_common.sh@10 -- # set +x 00:14:39.503 14:16:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:39.503 14:16:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.503 [2024-11-18 14:16:31.481667] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.503 14:16:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=850b0c18-3ec7-466d-a49f-7e849c116e18 00:14:39.503 14:16:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 850b0c18-3ec7-466d-a49f-7e849c116e18 ']' 00:14:39.503 14:16:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:39.762 [2024-11-18 14:16:31.673500] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.762 [2024-11-18 14:16:31.673523] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.762 [2024-11-18 14:16:31.673588] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.762 [2024-11-18 14:16:31.673653] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.762 [2024-11-18 14:16:31.673665] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:14:39.762 14:16:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:39.762 14:16:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.021 14:16:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:40.021 14:16:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:40.021 14:16:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.021 14:16:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:40.279 14:16:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.279 14:16:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:40.279 14:16:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.279 14:16:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:40.538 14:16:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:40.538 14:16:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:40.796 14:16:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:40.796 14:16:32 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:40.796 14:16:32 -- common/autotest_common.sh@650 -- # local es=0 00:14:40.796 14:16:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:40.796 14:16:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.796 14:16:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.796 14:16:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.796 14:16:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.796 14:16:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.796 14:16:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.796 14:16:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:40.796 14:16:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:40.796 14:16:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:41.054 [2024-11-18 14:16:32.905679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.054 [2024-11-18 14:16:32.907722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.054 [2024-11-18 14:16:32.907889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:41.054 [2024-11-18 14:16:32.908035] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:41.054 [2024-11-18 14:16:32.908220] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:41.054 [2024-11-18 14:16:32.908356] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:14:41.054 [2024-11-18 14:16:32.908502] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.054 [2024-11-18 14:16:32.908596] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:14:41.054 request: 00:14:41.054 { 00:14:41.054 "name": "raid_bdev1", 00:14:41.054 "raid_level": "raid0", 00:14:41.054 "base_bdevs": [ 00:14:41.054 "malloc1", 00:14:41.054 "malloc2", 00:14:41.054 "malloc3" 00:14:41.054 ], 00:14:41.054 "superblock": false, 00:14:41.054 "strip_size_kb": 64, 00:14:41.054 "method": "bdev_raid_create", 00:14:41.054 "req_id": 1 00:14:41.054 } 00:14:41.054 Got JSON-RPC error response 00:14:41.054 response: 00:14:41.054 { 00:14:41.054 "code": -17, 00:14:41.054 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.055 } 00:14:41.055 14:16:32 -- common/autotest_common.sh@653 -- # es=1 00:14:41.055 14:16:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.055 14:16:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.055 14:16:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.055 14:16:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
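The failure above is deliberate: the malloc disks still carry the raid superblock written through the now-deleted passthru bdevs ("Existing raid superblock found on bdev malloc1"), so creating a fresh array directly on them is rejected with -17, File exists. The NOT wrapper from autotest_common.sh inverts the exit status, so the test passes only because the RPC fails. The same expectation without the wrapper would look like:

    if scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo 'expected creation to fail on bdevs with a stale raid superblock' >&2
        exit 1
    fi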
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.055 14:16:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:41.313 14:16:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:41.313 14:16:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:41.313 14:16:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.572 [2024-11-18 14:16:33.409728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.572 [2024-11-18 14:16:33.409914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.572 [2024-11-18 14:16:33.409985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:41.572 [2024-11-18 14:16:33.410244] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.572 [2024-11-18 14:16:33.412470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.572 [2024-11-18 14:16:33.412641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.572 [2024-11-18 14:16:33.412833] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:41.572 [2024-11-18 14:16:33.413008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.572 pt1 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.572 14:16:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.831 14:16:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.831 "name": "raid_bdev1", 00:14:41.831 "uuid": "850b0c18-3ec7-466d-a49f-7e849c116e18", 00:14:41.831 "strip_size_kb": 64, 00:14:41.831 "state": "configuring", 00:14:41.831 "raid_level": "raid0", 00:14:41.831 "superblock": true, 00:14:41.831 "num_base_bdevs": 3, 00:14:41.831 "num_base_bdevs_discovered": 1, 00:14:41.831 "num_base_bdevs_operational": 3, 00:14:41.831 "base_bdevs_list": [ 00:14:41.831 { 00:14:41.831 "name": "pt1", 00:14:41.831 "uuid": "e803c77b-23c9-5126-a3f2-59582b3e92b4", 00:14:41.831 "is_configured": true, 00:14:41.831 "data_offset": 2048, 00:14:41.831 "data_size": 63488 00:14:41.831 }, 00:14:41.831 { 00:14:41.831 "name": null, 00:14:41.831 "uuid": "247dccd4-c1eb-5d2d-90c9-a8ecd0e7abb4", 00:14:41.831 "is_configured": false, 00:14:41.831 "data_offset": 2048, 00:14:41.831 "data_size": 63488 00:14:41.831 }, 00:14:41.831 { 00:14:41.831 "name": null, 00:14:41.831 "uuid": "11ea45f6-b876-538c-803e-084a216d2028", 00:14:41.831 "is_configured": false, 00:14:41.831 
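Re-registering pt1 on top of malloc1 lets the examine path find the old raid superblock, so pt1 is claimed and raid_bdev1 reappears in the "configuring" state shown in the surrounding dump. A small sketch of that step plus the state check (the uuid follows the log's convention; the jq filter is assumed):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    state=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [ "$state" = configuring ] && echo "raid_bdev1 found, waiting for pt2/pt3"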
"data_offset": 2048, 00:14:41.831 "data_size": 63488 00:14:41.831 } 00:14:41.831 ] 00:14:41.831 }' 00:14:41.831 14:16:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.831 14:16:33 -- common/autotest_common.sh@10 -- # set +x 00:14:42.398 14:16:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:14:42.398 14:16:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.657 [2024-11-18 14:16:34.505942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.657 [2024-11-18 14:16:34.506137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.657 [2024-11-18 14:16:34.506214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:14:42.657 [2024-11-18 14:16:34.506498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.657 [2024-11-18 14:16:34.506959] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.657 [2024-11-18 14:16:34.507114] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.657 [2024-11-18 14:16:34.507331] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:42.657 [2024-11-18 14:16:34.507473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.657 pt2 00:14:42.657 14:16:34 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:42.916 [2024-11-18 14:16:34.750003] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.916 "name": "raid_bdev1", 00:14:42.916 "uuid": "850b0c18-3ec7-466d-a49f-7e849c116e18", 00:14:42.916 "strip_size_kb": 64, 00:14:42.916 "state": "configuring", 00:14:42.916 "raid_level": "raid0", 00:14:42.916 "superblock": true, 00:14:42.916 "num_base_bdevs": 3, 00:14:42.916 "num_base_bdevs_discovered": 1, 00:14:42.916 "num_base_bdevs_operational": 3, 00:14:42.916 "base_bdevs_list": [ 00:14:42.916 { 00:14:42.916 "name": "pt1", 00:14:42.916 "uuid": "e803c77b-23c9-5126-a3f2-59582b3e92b4", 00:14:42.916 "is_configured": true, 00:14:42.916 "data_offset": 2048, 00:14:42.916 "data_size": 63488 00:14:42.916 }, 00:14:42.916 { 00:14:42.916 "name": null, 00:14:42.916 "uuid": 
"247dccd4-c1eb-5d2d-90c9-a8ecd0e7abb4", 00:14:42.916 "is_configured": false, 00:14:42.916 "data_offset": 2048, 00:14:42.916 "data_size": 63488 00:14:42.916 }, 00:14:42.916 { 00:14:42.916 "name": null, 00:14:42.916 "uuid": "11ea45f6-b876-538c-803e-084a216d2028", 00:14:42.916 "is_configured": false, 00:14:42.916 "data_offset": 2048, 00:14:42.916 "data_size": 63488 00:14:42.916 } 00:14:42.916 ] 00:14:42.916 }' 00:14:42.916 14:16:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.916 14:16:34 -- common/autotest_common.sh@10 -- # set +x 00:14:43.852 14:16:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:43.852 14:16:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:43.852 14:16:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.852 [2024-11-18 14:16:35.846192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.852 [2024-11-18 14:16:35.846389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.852 [2024-11-18 14:16:35.846456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:43.852 [2024-11-18 14:16:35.846759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.852 [2024-11-18 14:16:35.847122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.852 [2024-11-18 14:16:35.847308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.852 [2024-11-18 14:16:35.847488] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:43.852 [2024-11-18 14:16:35.847647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.852 pt2 00:14:43.852 14:16:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:43.852 14:16:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:43.852 14:16:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.110 [2024-11-18 14:16:36.090244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.110 [2024-11-18 14:16:36.090425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.110 [2024-11-18 14:16:36.090486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:44.110 [2024-11-18 14:16:36.090627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.110 [2024-11-18 14:16:36.090986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.110 [2024-11-18 14:16:36.091148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.111 [2024-11-18 14:16:36.091359] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:44.111 [2024-11-18 14:16:36.091489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.111 [2024-11-18 14:16:36.091696] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:14:44.111 [2024-11-18 14:16:36.091805] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:44.111 [2024-11-18 14:16:36.091911] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:14:44.111 [2024-11-18 14:16:36.092242] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:14:44.111 [2024-11-18 14:16:36.092285] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:14:44.111 [2024-11-18 14:16:36.092562] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.111 pt3 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.111 14:16:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.369 14:16:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.369 "name": "raid_bdev1", 00:14:44.369 "uuid": "850b0c18-3ec7-466d-a49f-7e849c116e18", 00:14:44.369 "strip_size_kb": 64, 00:14:44.369 "state": "online", 00:14:44.369 "raid_level": "raid0", 00:14:44.369 "superblock": true, 00:14:44.369 "num_base_bdevs": 3, 00:14:44.369 "num_base_bdevs_discovered": 3, 00:14:44.369 "num_base_bdevs_operational": 3, 00:14:44.369 "base_bdevs_list": [ 00:14:44.369 { 00:14:44.369 "name": "pt1", 00:14:44.369 "uuid": "e803c77b-23c9-5126-a3f2-59582b3e92b4", 00:14:44.369 "is_configured": true, 00:14:44.370 "data_offset": 2048, 00:14:44.370 "data_size": 63488 00:14:44.370 }, 00:14:44.370 { 00:14:44.370 "name": "pt2", 00:14:44.370 "uuid": "247dccd4-c1eb-5d2d-90c9-a8ecd0e7abb4", 00:14:44.370 "is_configured": true, 00:14:44.370 "data_offset": 2048, 00:14:44.370 "data_size": 63488 00:14:44.370 }, 00:14:44.370 { 00:14:44.370 "name": "pt3", 00:14:44.370 "uuid": "11ea45f6-b876-538c-803e-084a216d2028", 00:14:44.370 "is_configured": true, 00:14:44.370 "data_offset": 2048, 00:14:44.370 "data_size": 63488 00:14:44.370 } 00:14:44.370 ] 00:14:44.370 }' 00:14:44.370 14:16:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.370 14:16:36 -- common/autotest_common.sh@10 -- # set +x 00:14:44.937 14:16:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:44.937 14:16:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:45.195 [2024-11-18 14:16:37.110574] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.196 14:16:37 -- bdev/bdev_raid.sh@430 -- # '[' 850b0c18-3ec7-466d-a49f-7e849c116e18 '!=' 850b0c18-3ec7-466d-a49f-7e849c116e18 ']' 00:14:45.196 14:16:37 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:45.196 14:16:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:45.196 
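With all three base bdevs back, the test verifies the raid is "online" and that the uuid read back equals the one captured before teardown (850b0c18-3ec7-466d-a49f-7e849c116e18). A sketch of that uuid-stability check:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    expected=850b0c18-3ec7-466d-a49f-7e849c116e18   # uuid captured earlier in this log
    current=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ "$current" = "$expected" ] && echo "reassembled raid kept its uuid"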
14:16:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:45.196 14:16:37 -- bdev/bdev_raid.sh@511 -- # killprocess 125891 00:14:45.196 14:16:37 -- common/autotest_common.sh@936 -- # '[' -z 125891 ']' 00:14:45.196 14:16:37 -- common/autotest_common.sh@940 -- # kill -0 125891 00:14:45.196 14:16:37 -- common/autotest_common.sh@941 -- # uname 00:14:45.196 14:16:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.196 14:16:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125891 00:14:45.196 killing process with pid 125891 00:14:45.196 14:16:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.196 14:16:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.196 14:16:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125891' 00:14:45.196 14:16:37 -- common/autotest_common.sh@955 -- # kill 125891 00:14:45.196 [2024-11-18 14:16:37.149396] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.196 14:16:37 -- common/autotest_common.sh@960 -- # wait 125891 00:14:45.196 [2024-11-18 14:16:37.149444] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.196 [2024-11-18 14:16:37.149486] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.196 [2024-11-18 14:16:37.149495] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:14:45.196 [2024-11-18 14:16:37.185983] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.454 ************************************ 00:14:45.454 END TEST raid_superblock_test 00:14:45.454 ************************************ 00:14:45.454 14:16:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:45.454 00:14:45.454 real 0m9.722s 00:14:45.454 user 0m17.489s 00:14:45.454 sys 0m1.386s 00:14:45.454 14:16:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:45.454 14:16:37 -- common/autotest_common.sh@10 -- # set +x 00:14:45.454 14:16:37 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:45.455 14:16:37 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:14:45.455 14:16:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:45.455 14:16:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.455 14:16:37 -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 ************************************ 00:14:45.713 START TEST raid_state_function_test 00:14:45.713 ************************************ 00:14:45.713 14:16:37 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:14:45.713 14:16:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:45.713 14:16:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:45.713 14:16:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:45.713 14:16:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:45.713 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i <= 
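killprocess above stops the bdev_svc app that backed this run: it checks the pid is alive with kill -0, confirms the process name via ps, sends a signal, and waits for the process to exit so the next test can reuse the RPC socket. A simplified sketch of that pattern (the ps name check is omitted; wait works here only because the app is a child of the test shell):

    killproc() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      kill "$pid"                              # terminate, as killprocess does
      wait "$pid" 2>/dev/null || true          # reap the child
    }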
num_base_bdevs )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=126189 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126189' 00:14:45.714 Process raid pid: 126189 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:45.714 14:16:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126189 /var/tmp/spdk-raid.sock 00:14:45.714 14:16:37 -- common/autotest_common.sh@829 -- # '[' -z 126189 ']' 00:14:45.714 14:16:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:45.714 14:16:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.714 14:16:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:45.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:45.714 14:16:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.714 14:16:37 -- common/autotest_common.sh@10 -- # set +x 00:14:45.714 [2024-11-18 14:16:37.585967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:14:45.714 [2024-11-18 14:16:37.586354] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.714 [2024-11-18 14:16:37.724506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.972 [2024-11-18 14:16:37.797560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.972 [2024-11-18 14:16:37.867484] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.539 14:16:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.539 14:16:38 -- common/autotest_common.sh@862 -- # return 0 00:14:46.539 14:16:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:46.797 [2024-11-18 14:16:38.717866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.798 [2024-11-18 14:16:38.718196] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.798 [2024-11-18 14:16:38.718311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.798 [2024-11-18 14:16:38.718372] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.798 [2024-11-18 14:16:38.718462] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:46.798 [2024-11-18 14:16:38.718544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.798 14:16:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.056 14:16:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.057 "name": "Existed_Raid", 00:14:47.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.057 "strip_size_kb": 64, 00:14:47.057 "state": "configuring", 00:14:47.057 "raid_level": "concat", 00:14:47.057 "superblock": false, 00:14:47.057 "num_base_bdevs": 3, 00:14:47.057 "num_base_bdevs_discovered": 0, 00:14:47.057 "num_base_bdevs_operational": 3, 00:14:47.057 "base_bdevs_list": [ 00:14:47.057 { 00:14:47.057 "name": "BaseBdev1", 00:14:47.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.057 "is_configured": false, 00:14:47.057 "data_offset": 0, 00:14:47.057 "data_size": 0 00:14:47.057 }, 00:14:47.057 { 00:14:47.057 "name": "BaseBdev2", 00:14:47.057 "uuid": 
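raid_state_function_test starts by creating Existed_Raid from three base bdevs that do not exist yet; the RPC is accepted ("base bdev BaseBdev1 doesn't exist now") and the raid sits in "configuring" with zero discovered bases, as the dump that follows shows. A sketch of that step plus the state assertion (jq expression assumed):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -e '
      .[] | select(.name == "Existed_Raid")
          | .state == "configuring" and .num_base_bdevs_discovered == 0'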
"00000000-0000-0000-0000-000000000000", 00:14:47.057 "is_configured": false, 00:14:47.057 "data_offset": 0, 00:14:47.057 "data_size": 0 00:14:47.057 }, 00:14:47.057 { 00:14:47.057 "name": "BaseBdev3", 00:14:47.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.057 "is_configured": false, 00:14:47.057 "data_offset": 0, 00:14:47.057 "data_size": 0 00:14:47.057 } 00:14:47.057 ] 00:14:47.057 }' 00:14:47.057 14:16:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.057 14:16:38 -- common/autotest_common.sh@10 -- # set +x 00:14:47.641 14:16:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.900 [2024-11-18 14:16:39.773876] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.900 [2024-11-18 14:16:39.774036] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:47.900 14:16:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:48.159 [2024-11-18 14:16:40.021948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.159 [2024-11-18 14:16:40.022126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.159 [2024-11-18 14:16:40.022226] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.159 [2024-11-18 14:16:40.022389] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.159 [2024-11-18 14:16:40.022489] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.159 [2024-11-18 14:16:40.022553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.159 14:16:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.159 [2024-11-18 14:16:40.219678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.159 BaseBdev1 00:14:48.159 14:16:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:48.159 14:16:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:48.159 14:16:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.159 14:16:40 -- common/autotest_common.sh@899 -- # local i 00:14:48.159 14:16:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.159 14:16:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.159 14:16:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.418 14:16:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.677 [ 00:14:48.677 { 00:14:48.677 "name": "BaseBdev1", 00:14:48.677 "aliases": [ 00:14:48.677 "0e9ca441-864f-4d45-a4e9-f896d93259a5" 00:14:48.677 ], 00:14:48.677 "product_name": "Malloc disk", 00:14:48.677 "block_size": 512, 00:14:48.677 "num_blocks": 65536, 00:14:48.677 "uuid": "0e9ca441-864f-4d45-a4e9-f896d93259a5", 00:14:48.677 "assigned_rate_limits": { 00:14:48.677 "rw_ios_per_sec": 0, 00:14:48.677 "rw_mbytes_per_sec": 0, 00:14:48.677 "r_mbytes_per_sec": 0, 00:14:48.677 "w_mbytes_per_sec": 
0 00:14:48.677 }, 00:14:48.677 "claimed": true, 00:14:48.677 "claim_type": "exclusive_write", 00:14:48.677 "zoned": false, 00:14:48.677 "supported_io_types": { 00:14:48.677 "read": true, 00:14:48.677 "write": true, 00:14:48.677 "unmap": true, 00:14:48.677 "write_zeroes": true, 00:14:48.677 "flush": true, 00:14:48.677 "reset": true, 00:14:48.677 "compare": false, 00:14:48.677 "compare_and_write": false, 00:14:48.677 "abort": true, 00:14:48.677 "nvme_admin": false, 00:14:48.677 "nvme_io": false 00:14:48.677 }, 00:14:48.677 "memory_domains": [ 00:14:48.677 { 00:14:48.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.677 "dma_device_type": 2 00:14:48.677 } 00:14:48.677 ], 00:14:48.677 "driver_specific": {} 00:14:48.677 } 00:14:48.677 ] 00:14:48.677 14:16:40 -- common/autotest_common.sh@905 -- # return 0 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.677 14:16:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.678 14:16:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.678 14:16:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.678 14:16:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.936 14:16:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.936 "name": "Existed_Raid", 00:14:48.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.936 "strip_size_kb": 64, 00:14:48.936 "state": "configuring", 00:14:48.936 "raid_level": "concat", 00:14:48.936 "superblock": false, 00:14:48.936 "num_base_bdevs": 3, 00:14:48.936 "num_base_bdevs_discovered": 1, 00:14:48.936 "num_base_bdevs_operational": 3, 00:14:48.936 "base_bdevs_list": [ 00:14:48.936 { 00:14:48.936 "name": "BaseBdev1", 00:14:48.936 "uuid": "0e9ca441-864f-4d45-a4e9-f896d93259a5", 00:14:48.936 "is_configured": true, 00:14:48.936 "data_offset": 0, 00:14:48.936 "data_size": 65536 00:14:48.936 }, 00:14:48.936 { 00:14:48.936 "name": "BaseBdev2", 00:14:48.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.936 "is_configured": false, 00:14:48.936 "data_offset": 0, 00:14:48.936 "data_size": 0 00:14:48.936 }, 00:14:48.936 { 00:14:48.936 "name": "BaseBdev3", 00:14:48.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.936 "is_configured": false, 00:14:48.936 "data_offset": 0, 00:14:48.936 "data_size": 0 00:14:48.936 } 00:14:48.936 ] 00:14:48.936 }' 00:14:48.936 14:16:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.936 14:16:40 -- common/autotest_common.sh@10 -- # set +x 00:14:49.505 14:16:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:49.764 [2024-11-18 14:16:41.731927] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.764 [2024-11-18 14:16:41.732088] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:14:49.764 14:16:41 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:49.764 14:16:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:50.023 [2024-11-18 14:16:41.912022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.023 [2024-11-18 14:16:41.913926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.024 [2024-11-18 14:16:41.914099] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.024 [2024-11-18 14:16:41.914197] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.024 [2024-11-18 14:16:41.914264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.024 14:16:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.282 14:16:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.282 "name": "Existed_Raid", 00:14:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.282 "strip_size_kb": 64, 00:14:50.282 "state": "configuring", 00:14:50.282 "raid_level": "concat", 00:14:50.282 "superblock": false, 00:14:50.282 "num_base_bdevs": 3, 00:14:50.282 "num_base_bdevs_discovered": 1, 00:14:50.283 "num_base_bdevs_operational": 3, 00:14:50.283 "base_bdevs_list": [ 00:14:50.283 { 00:14:50.283 "name": "BaseBdev1", 00:14:50.283 "uuid": "0e9ca441-864f-4d45-a4e9-f896d93259a5", 00:14:50.283 "is_configured": true, 00:14:50.283 "data_offset": 0, 00:14:50.283 "data_size": 65536 00:14:50.283 }, 00:14:50.283 { 00:14:50.283 "name": "BaseBdev2", 00:14:50.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.283 "is_configured": false, 00:14:50.283 "data_offset": 0, 00:14:50.283 "data_size": 0 00:14:50.283 }, 00:14:50.283 { 00:14:50.283 "name": "BaseBdev3", 00:14:50.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.283 "is_configured": false, 00:14:50.283 "data_offset": 0, 00:14:50.283 "data_size": 0 00:14:50.283 } 00:14:50.283 ] 00:14:50.283 }' 00:14:50.283 14:16:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.283 14:16:42 -- common/autotest_common.sh@10 -- # set +x 00:14:50.891 14:16:42 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.185 [2024-11-18 14:16:43.023706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.185 BaseBdev2 00:14:51.185 14:16:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:51.185 14:16:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:51.185 14:16:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:51.185 14:16:43 -- common/autotest_common.sh@899 -- # local i 00:14:51.185 14:16:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:51.185 14:16:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:51.185 14:16:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:51.456 14:16:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.456 [ 00:14:51.456 { 00:14:51.456 "name": "BaseBdev2", 00:14:51.456 "aliases": [ 00:14:51.456 "d4b12a1c-3290-476d-8bcb-5ec5d99be108" 00:14:51.456 ], 00:14:51.456 "product_name": "Malloc disk", 00:14:51.456 "block_size": 512, 00:14:51.456 "num_blocks": 65536, 00:14:51.456 "uuid": "d4b12a1c-3290-476d-8bcb-5ec5d99be108", 00:14:51.456 "assigned_rate_limits": { 00:14:51.456 "rw_ios_per_sec": 0, 00:14:51.456 "rw_mbytes_per_sec": 0, 00:14:51.456 "r_mbytes_per_sec": 0, 00:14:51.456 "w_mbytes_per_sec": 0 00:14:51.456 }, 00:14:51.456 "claimed": true, 00:14:51.456 "claim_type": "exclusive_write", 00:14:51.456 "zoned": false, 00:14:51.456 "supported_io_types": { 00:14:51.456 "read": true, 00:14:51.456 "write": true, 00:14:51.456 "unmap": true, 00:14:51.456 "write_zeroes": true, 00:14:51.456 "flush": true, 00:14:51.456 "reset": true, 00:14:51.456 "compare": false, 00:14:51.456 "compare_and_write": false, 00:14:51.456 "abort": true, 00:14:51.456 "nvme_admin": false, 00:14:51.456 "nvme_io": false 00:14:51.456 }, 00:14:51.456 "memory_domains": [ 00:14:51.456 { 00:14:51.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.456 "dma_device_type": 2 00:14:51.456 } 00:14:51.456 ], 00:14:51.456 "driver_specific": {} 00:14:51.456 } 00:14:51.456 ] 00:14:51.456 14:16:43 -- common/autotest_common.sh@905 -- # return 0 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.456 14:16:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:51.715 14:16:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.715 "name": "Existed_Raid", 00:14:51.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.715 "strip_size_kb": 64, 00:14:51.715 "state": "configuring", 00:14:51.715 "raid_level": "concat", 00:14:51.715 "superblock": false, 00:14:51.715 "num_base_bdevs": 3, 00:14:51.715 "num_base_bdevs_discovered": 2, 00:14:51.715 "num_base_bdevs_operational": 3, 00:14:51.715 "base_bdevs_list": [ 00:14:51.715 { 00:14:51.715 "name": "BaseBdev1", 00:14:51.715 "uuid": "0e9ca441-864f-4d45-a4e9-f896d93259a5", 00:14:51.715 "is_configured": true, 00:14:51.715 "data_offset": 0, 00:14:51.715 "data_size": 65536 00:14:51.715 }, 00:14:51.715 { 00:14:51.715 "name": "BaseBdev2", 00:14:51.715 "uuid": "d4b12a1c-3290-476d-8bcb-5ec5d99be108", 00:14:51.715 "is_configured": true, 00:14:51.715 "data_offset": 0, 00:14:51.715 "data_size": 65536 00:14:51.715 }, 00:14:51.715 { 00:14:51.715 "name": "BaseBdev3", 00:14:51.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.715 "is_configured": false, 00:14:51.715 "data_offset": 0, 00:14:51.715 "data_size": 0 00:14:51.715 } 00:14:51.715 ] 00:14:51.715 }' 00:14:51.715 14:16:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.715 14:16:43 -- common/autotest_common.sh@10 -- # set +x 00:14:52.283 14:16:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.542 [2024-11-18 14:16:44.431535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.542 [2024-11-18 14:16:44.431709] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:52.542 [2024-11-18 14:16:44.431753] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:52.542 [2024-11-18 14:16:44.432005] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:52.542 [2024-11-18 14:16:44.432553] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:52.542 [2024-11-18 14:16:44.432684] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:52.542 [2024-11-18 14:16:44.433000] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.542 BaseBdev3 00:14:52.542 14:16:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:52.542 14:16:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:52.542 14:16:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.542 14:16:44 -- common/autotest_common.sh@899 -- # local i 00:14:52.542 14:16:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.542 14:16:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.542 14:16:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.801 14:16:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.801 [ 00:14:52.801 { 00:14:52.801 "name": "BaseBdev3", 00:14:52.801 "aliases": [ 00:14:52.801 "42ca971b-2c73-4c36-82ba-17b6d6004acf" 00:14:52.801 ], 00:14:52.801 "product_name": "Malloc disk", 00:14:52.801 "block_size": 512, 00:14:52.801 "num_blocks": 65536, 00:14:52.801 "uuid": "42ca971b-2c73-4c36-82ba-17b6d6004acf", 00:14:52.801 "assigned_rate_limits": { 00:14:52.801 
"rw_ios_per_sec": 0, 00:14:52.801 "rw_mbytes_per_sec": 0, 00:14:52.801 "r_mbytes_per_sec": 0, 00:14:52.801 "w_mbytes_per_sec": 0 00:14:52.801 }, 00:14:52.801 "claimed": true, 00:14:52.801 "claim_type": "exclusive_write", 00:14:52.801 "zoned": false, 00:14:52.801 "supported_io_types": { 00:14:52.801 "read": true, 00:14:52.801 "write": true, 00:14:52.801 "unmap": true, 00:14:52.801 "write_zeroes": true, 00:14:52.801 "flush": true, 00:14:52.801 "reset": true, 00:14:52.801 "compare": false, 00:14:52.801 "compare_and_write": false, 00:14:52.801 "abort": true, 00:14:52.801 "nvme_admin": false, 00:14:52.801 "nvme_io": false 00:14:52.801 }, 00:14:52.801 "memory_domains": [ 00:14:52.801 { 00:14:52.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.801 "dma_device_type": 2 00:14:52.801 } 00:14:52.801 ], 00:14:52.801 "driver_specific": {} 00:14:52.801 } 00:14:52.801 ] 00:14:52.801 14:16:44 -- common/autotest_common.sh@905 -- # return 0 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:52.801 14:16:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.061 14:16:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.061 14:16:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.061 14:16:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.061 14:16:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.061 14:16:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.061 14:16:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.061 "name": "Existed_Raid", 00:14:53.061 "uuid": "f45cf028-70bf-4212-8716-864d862e5a4c", 00:14:53.061 "strip_size_kb": 64, 00:14:53.061 "state": "online", 00:14:53.061 "raid_level": "concat", 00:14:53.061 "superblock": false, 00:14:53.061 "num_base_bdevs": 3, 00:14:53.061 "num_base_bdevs_discovered": 3, 00:14:53.061 "num_base_bdevs_operational": 3, 00:14:53.061 "base_bdevs_list": [ 00:14:53.061 { 00:14:53.061 "name": "BaseBdev1", 00:14:53.061 "uuid": "0e9ca441-864f-4d45-a4e9-f896d93259a5", 00:14:53.061 "is_configured": true, 00:14:53.061 "data_offset": 0, 00:14:53.061 "data_size": 65536 00:14:53.061 }, 00:14:53.061 { 00:14:53.061 "name": "BaseBdev2", 00:14:53.061 "uuid": "d4b12a1c-3290-476d-8bcb-5ec5d99be108", 00:14:53.061 "is_configured": true, 00:14:53.061 "data_offset": 0, 00:14:53.061 "data_size": 65536 00:14:53.061 }, 00:14:53.061 { 00:14:53.061 "name": "BaseBdev3", 00:14:53.061 "uuid": "42ca971b-2c73-4c36-82ba-17b6d6004acf", 00:14:53.061 "is_configured": true, 00:14:53.061 "data_offset": 0, 00:14:53.061 "data_size": 65536 00:14:53.061 } 00:14:53.061 ] 00:14:53.061 }' 00:14:53.061 14:16:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.061 14:16:45 -- common/autotest_common.sh@10 -- # set +x 00:14:53.628 14:16:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:14:53.887 [2024-11-18 14:16:45.844687] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.887 [2024-11-18 14:16:45.844837] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.887 [2024-11-18 14:16:45.845026] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.887 14:16:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.146 14:16:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.146 "name": "Existed_Raid", 00:14:54.146 "uuid": "f45cf028-70bf-4212-8716-864d862e5a4c", 00:14:54.146 "strip_size_kb": 64, 00:14:54.146 "state": "offline", 00:14:54.146 "raid_level": "concat", 00:14:54.146 "superblock": false, 00:14:54.146 "num_base_bdevs": 3, 00:14:54.146 "num_base_bdevs_discovered": 2, 00:14:54.146 "num_base_bdevs_operational": 2, 00:14:54.146 "base_bdevs_list": [ 00:14:54.146 { 00:14:54.146 "name": null, 00:14:54.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.146 "is_configured": false, 00:14:54.146 "data_offset": 0, 00:14:54.146 "data_size": 65536 00:14:54.146 }, 00:14:54.146 { 00:14:54.146 "name": "BaseBdev2", 00:14:54.146 "uuid": "d4b12a1c-3290-476d-8bcb-5ec5d99be108", 00:14:54.146 "is_configured": true, 00:14:54.146 "data_offset": 0, 00:14:54.146 "data_size": 65536 00:14:54.146 }, 00:14:54.146 { 00:14:54.146 "name": "BaseBdev3", 00:14:54.146 "uuid": "42ca971b-2c73-4c36-82ba-17b6d6004acf", 00:14:54.146 "is_configured": true, 00:14:54.146 "data_offset": 0, 00:14:54.146 "data_size": 65536 00:14:54.146 } 00:14:54.146 ] 00:14:54.146 }' 00:14:54.146 14:16:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.146 14:16:46 -- common/autotest_common.sh@10 -- # set +x 00:14:54.714 14:16:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:54.714 14:16:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:54.714 14:16:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.714 14:16:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:54.973 14:16:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:54.973 14:16:46 -- 
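Deleting BaseBdev1 above exercises the failure path: has_redundancy returns 1 for concat, so the expected state flips to "offline" and the dump that follows shows two operational bases with a null slot where BaseBdev1 was. A sketch of that helper as this log exercises it (only the non-redundant branch is observable here; the raid1 branch is assumed):

    # concat/raid0 stripe data with no mirror, so losing any base bdev is fatal
    has_redundancy() {
      case $1 in
        raid1) return 0 ;;   # assumed: mirrored levels tolerate a missing base
        *)     return 1 ;;   # observed for concat and raid0 in this log
      esac
    }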
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:54.973 14:16:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:55.232 [2024-11-18 14:16:47.179933] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.232 14:16:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:55.232 14:16:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:55.232 14:16:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.232 14:16:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:55.491 14:16:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:55.491 14:16:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.491 14:16:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:55.750 [2024-11-18 14:16:47.609854] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:55.750 [2024-11-18 14:16:47.610090] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:55.750 14:16:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:55.750 14:16:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:55.750 14:16:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.750 14:16:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.009 14:16:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:56.009 14:16:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:56.009 14:16:47 -- bdev/bdev_raid.sh@287 -- # killprocess 126189 00:14:56.009 14:16:47 -- common/autotest_common.sh@936 -- # '[' -z 126189 ']' 00:14:56.009 14:16:47 -- common/autotest_common.sh@940 -- # kill -0 126189 00:14:56.009 14:16:47 -- common/autotest_common.sh@941 -- # uname 00:14:56.009 14:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.009 14:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126189 00:14:56.009 14:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:56.009 14:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.009 14:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126189' 00:14:56.009 killing process with pid 126189 00:14:56.009 14:16:47 -- common/autotest_common.sh@955 -- # kill 126189 00:14:56.009 14:16:47 -- common/autotest_common.sh@960 -- # wait 126189 00:14:56.009 [2024-11-18 14:16:47.901306] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.009 [2024-11-18 14:16:47.901390] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:56.268 00:14:56.268 real 0m10.586s 00:14:56.268 user 0m19.472s 00:14:56.268 sys 0m1.292s 00:14:56.268 14:16:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:56.268 14:16:48 -- common/autotest_common.sh@10 -- # set +x 00:14:56.268 ************************************ 00:14:56.268 END TEST raid_state_function_test 00:14:56.268 ************************************ 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:14:56.268 14:16:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 
00:14:56.268 14:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.268 14:16:48 -- common/autotest_common.sh@10 -- # set +x 00:14:56.268 ************************************ 00:14:56.268 START TEST raid_state_function_test_sb 00:14:56.268 ************************************ 00:14:56.268 14:16:48 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=126554 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126554' 00:14:56.268 Process raid pid: 126554 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:56.268 14:16:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126554 /var/tmp/spdk-raid.sock 00:14:56.268 14:16:48 -- common/autotest_common.sh@829 -- # '[' -z 126554 ']' 00:14:56.268 14:16:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.268 14:16:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.268 14:16:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:56.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:56.268 14:16:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.268 14:16:48 -- common/autotest_common.sh@10 -- # set +x 00:14:56.268 [2024-11-18 14:16:48.245245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
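The _sb variant launches a fresh bdev_svc instance (pid 126554) on the same private socket and waits for it to accept RPCs before driving the test. A hedged sketch of that startup, with waitforlisten's internals approximated by polling the standard rpc_get_methods RPC:

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    "$app" -r "$sock" -i 0 -L bdev_raid &   # flags as in the log's invocation
    raid_pid=$!
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$raid_pid" || exit 1         # app died before listening
      sleep 0.1
    done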
00:14:56.268 [2024-11-18 14:16:48.245765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.528 [2024-11-18 14:16:48.386859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.528 [2024-11-18 14:16:48.463531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.528 [2024-11-18 14:16:48.534020] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.095 14:16:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.095 14:16:49 -- common/autotest_common.sh@862 -- # return 0 00:14:57.095 14:16:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:57.353 [2024-11-18 14:16:49.300971] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.353 [2024-11-18 14:16:49.301234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.353 [2024-11-18 14:16:49.301367] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.353 [2024-11-18 14:16:49.301437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.353 [2024-11-18 14:16:49.301678] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.353 [2024-11-18 14:16:49.301784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.353 14:16:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.612 14:16:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.612 "name": "Existed_Raid", 00:14:57.612 "uuid": "41aea17c-09fa-436f-afc0-7195a66af705", 00:14:57.612 "strip_size_kb": 64, 00:14:57.612 "state": "configuring", 00:14:57.612 "raid_level": "concat", 00:14:57.612 "superblock": true, 00:14:57.613 "num_base_bdevs": 3, 00:14:57.613 "num_base_bdevs_discovered": 0, 00:14:57.613 "num_base_bdevs_operational": 3, 00:14:57.613 "base_bdevs_list": [ 00:14:57.613 { 00:14:57.613 "name": "BaseBdev1", 00:14:57.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.613 "is_configured": false, 00:14:57.613 "data_offset": 0, 00:14:57.613 "data_size": 0 00:14:57.613 }, 00:14:57.613 { 00:14:57.613 "name": "BaseBdev2", 00:14:57.613 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:57.613 "is_configured": false, 00:14:57.613 "data_offset": 0, 00:14:57.613 "data_size": 0 00:14:57.613 }, 00:14:57.613 { 00:14:57.613 "name": "BaseBdev3", 00:14:57.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.613 "is_configured": false, 00:14:57.613 "data_offset": 0, 00:14:57.613 "data_size": 0 00:14:57.613 } 00:14:57.613 ] 00:14:57.613 }' 00:14:57.613 14:16:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.613 14:16:49 -- common/autotest_common.sh@10 -- # set +x 00:14:58.179 14:16:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.438 [2024-11-18 14:16:50.312970] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.438 [2024-11-18 14:16:50.313149] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:58.438 14:16:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:58.438 [2024-11-18 14:16:50.501033] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.438 [2024-11-18 14:16:50.501228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.438 [2024-11-18 14:16:50.501346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.438 [2024-11-18 14:16:50.501418] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.438 [2024-11-18 14:16:50.501522] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:58.438 [2024-11-18 14:16:50.501597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.697 14:16:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.697 [2024-11-18 14:16:50.746938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.697 BaseBdev1 00:14:58.697 14:16:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:58.697 14:16:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:58.697 14:16:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.697 14:16:50 -- common/autotest_common.sh@899 -- # local i 00:14:58.697 14:16:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.697 14:16:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.697 14:16:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.956 14:16:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.215 [ 00:14:59.215 { 00:14:59.215 "name": "BaseBdev1", 00:14:59.215 "aliases": [ 00:14:59.215 "34dd9a9e-7293-4c08-beed-43cb0de4297b" 00:14:59.215 ], 00:14:59.215 "product_name": "Malloc disk", 00:14:59.215 "block_size": 512, 00:14:59.215 "num_blocks": 65536, 00:14:59.215 "uuid": "34dd9a9e-7293-4c08-beed-43cb0de4297b", 00:14:59.215 "assigned_rate_limits": { 00:14:59.215 "rw_ios_per_sec": 0, 00:14:59.215 "rw_mbytes_per_sec": 0, 00:14:59.215 "r_mbytes_per_sec": 0, 00:14:59.215 
"w_mbytes_per_sec": 0 00:14:59.215 }, 00:14:59.215 "claimed": true, 00:14:59.215 "claim_type": "exclusive_write", 00:14:59.215 "zoned": false, 00:14:59.215 "supported_io_types": { 00:14:59.215 "read": true, 00:14:59.215 "write": true, 00:14:59.215 "unmap": true, 00:14:59.215 "write_zeroes": true, 00:14:59.215 "flush": true, 00:14:59.215 "reset": true, 00:14:59.215 "compare": false, 00:14:59.215 "compare_and_write": false, 00:14:59.215 "abort": true, 00:14:59.215 "nvme_admin": false, 00:14:59.215 "nvme_io": false 00:14:59.215 }, 00:14:59.215 "memory_domains": [ 00:14:59.215 { 00:14:59.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.215 "dma_device_type": 2 00:14:59.215 } 00:14:59.215 ], 00:14:59.215 "driver_specific": {} 00:14:59.215 } 00:14:59.216 ] 00:14:59.216 14:16:51 -- common/autotest_common.sh@905 -- # return 0 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.216 14:16:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.474 14:16:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:59.474 "name": "Existed_Raid", 00:14:59.474 "uuid": "e9dd23a4-5d0e-43e0-9b74-feb7265eb599", 00:14:59.474 "strip_size_kb": 64, 00:14:59.474 "state": "configuring", 00:14:59.474 "raid_level": "concat", 00:14:59.474 "superblock": true, 00:14:59.474 "num_base_bdevs": 3, 00:14:59.474 "num_base_bdevs_discovered": 1, 00:14:59.474 "num_base_bdevs_operational": 3, 00:14:59.474 "base_bdevs_list": [ 00:14:59.474 { 00:14:59.474 "name": "BaseBdev1", 00:14:59.474 "uuid": "34dd9a9e-7293-4c08-beed-43cb0de4297b", 00:14:59.474 "is_configured": true, 00:14:59.474 "data_offset": 2048, 00:14:59.474 "data_size": 63488 00:14:59.474 }, 00:14:59.474 { 00:14:59.474 "name": "BaseBdev2", 00:14:59.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.474 "is_configured": false, 00:14:59.474 "data_offset": 0, 00:14:59.474 "data_size": 0 00:14:59.474 }, 00:14:59.474 { 00:14:59.474 "name": "BaseBdev3", 00:14:59.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.474 "is_configured": false, 00:14:59.474 "data_offset": 0, 00:14:59.474 "data_size": 0 00:14:59.474 } 00:14:59.474 ] 00:14:59.474 }' 00:14:59.474 14:16:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:59.474 14:16:51 -- common/autotest_common.sh@10 -- # set +x 00:15:00.041 14:16:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.041 [2024-11-18 14:16:52.059158] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.041 [2024-11-18 14:16:52.059385] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:00.041 14:16:52 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:00.041 14:16:52 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:00.299 14:16:52 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.558 BaseBdev1 00:15:00.558 14:16:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:00.558 14:16:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:00.558 14:16:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:00.558 14:16:52 -- common/autotest_common.sh@899 -- # local i 00:15:00.558 14:16:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:00.558 14:16:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:00.558 14:16:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.817 14:16:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.817 [ 00:15:00.817 { 00:15:00.817 "name": "BaseBdev1", 00:15:00.817 "aliases": [ 00:15:00.817 "3b2fefc9-b094-44fd-8ef6-0f038702f0ac" 00:15:00.817 ], 00:15:00.817 "product_name": "Malloc disk", 00:15:00.817 "block_size": 512, 00:15:00.817 "num_blocks": 65536, 00:15:00.817 "uuid": "3b2fefc9-b094-44fd-8ef6-0f038702f0ac", 00:15:00.817 "assigned_rate_limits": { 00:15:00.817 "rw_ios_per_sec": 0, 00:15:00.817 "rw_mbytes_per_sec": 0, 00:15:00.817 "r_mbytes_per_sec": 0, 00:15:00.817 "w_mbytes_per_sec": 0 00:15:00.817 }, 00:15:00.817 "claimed": false, 00:15:00.817 "zoned": false, 00:15:00.817 "supported_io_types": { 00:15:00.817 "read": true, 00:15:00.817 "write": true, 00:15:00.817 "unmap": true, 00:15:00.817 "write_zeroes": true, 00:15:00.817 "flush": true, 00:15:00.817 "reset": true, 00:15:00.817 "compare": false, 00:15:00.817 "compare_and_write": false, 00:15:00.817 "abort": true, 00:15:00.817 "nvme_admin": false, 00:15:00.817 "nvme_io": false 00:15:00.817 }, 00:15:00.817 "memory_domains": [ 00:15:00.817 { 00:15:00.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.817 "dma_device_type": 2 00:15:00.817 } 00:15:00.817 ], 00:15:00.817 "driver_specific": {} 00:15:00.817 } 00:15:00.817 ] 00:15:00.817 14:16:52 -- common/autotest_common.sh@905 -- # return 0 00:15:00.817 14:16:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:01.090 [2024-11-18 14:16:53.054289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.090 [2024-11-18 14:16:53.056494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.090 [2024-11-18 14:16:53.056684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.090 [2024-11-18 14:16:53.056836] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.090 [2024-11-18 14:16:53.056915] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:01.090 
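(Annotation: the create-and-claim flow traced above can be driven by hand against a running bdev_svc. The following is a condensed sketch assembled only from RPC commands that appear verbatim in this log — socket path, malloc geometry and bdev names are copied from the trace — not the test script itself:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Register the concat array first; -z 64 sets the strip size in KiB and -s asks
  # for an on-disk superblock. The array sits in state "configuring" while any
  # base bdev is still missing.
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # 32 MiB malloc disks with 512-byte blocks (65536 blocks, matching the JSON dumps above).
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  $RPC bdev_malloc_create 32 512 -b BaseBdev3   # the third claim moves the array to "online"

End of annotation.)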
14:16:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.090 14:16:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.348 14:16:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.348 "name": "Existed_Raid", 00:15:01.348 "uuid": "ac68d37e-4eaf-4768-b136-7ea618d3ca02", 00:15:01.348 "strip_size_kb": 64, 00:15:01.348 "state": "configuring", 00:15:01.348 "raid_level": "concat", 00:15:01.348 "superblock": true, 00:15:01.348 "num_base_bdevs": 3, 00:15:01.348 "num_base_bdevs_discovered": 1, 00:15:01.348 "num_base_bdevs_operational": 3, 00:15:01.348 "base_bdevs_list": [ 00:15:01.348 { 00:15:01.348 "name": "BaseBdev1", 00:15:01.348 "uuid": "3b2fefc9-b094-44fd-8ef6-0f038702f0ac", 00:15:01.348 "is_configured": true, 00:15:01.348 "data_offset": 2048, 00:15:01.348 "data_size": 63488 00:15:01.348 }, 00:15:01.348 { 00:15:01.348 "name": "BaseBdev2", 00:15:01.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.348 "is_configured": false, 00:15:01.348 "data_offset": 0, 00:15:01.348 "data_size": 0 00:15:01.348 }, 00:15:01.348 { 00:15:01.348 "name": "BaseBdev3", 00:15:01.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.348 "is_configured": false, 00:15:01.348 "data_offset": 0, 00:15:01.348 "data_size": 0 00:15:01.348 } 00:15:01.348 ] 00:15:01.348 }' 00:15:01.348 14:16:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.348 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:15:01.916 14:16:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.175 [2024-11-18 14:16:54.044833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.175 BaseBdev2 00:15:02.175 14:16:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:02.175 14:16:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:02.175 14:16:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.175 14:16:54 -- common/autotest_common.sh@899 -- # local i 00:15:02.175 14:16:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.175 14:16:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.175 14:16:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.175 14:16:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.434 [ 00:15:02.435 { 00:15:02.435 "name": "BaseBdev2", 00:15:02.435 "aliases": [ 00:15:02.435 
"6d7d2d4b-9f01-49ed-a2cd-595235fbf754" 00:15:02.435 ], 00:15:02.435 "product_name": "Malloc disk", 00:15:02.435 "block_size": 512, 00:15:02.435 "num_blocks": 65536, 00:15:02.435 "uuid": "6d7d2d4b-9f01-49ed-a2cd-595235fbf754", 00:15:02.435 "assigned_rate_limits": { 00:15:02.435 "rw_ios_per_sec": 0, 00:15:02.435 "rw_mbytes_per_sec": 0, 00:15:02.435 "r_mbytes_per_sec": 0, 00:15:02.435 "w_mbytes_per_sec": 0 00:15:02.435 }, 00:15:02.435 "claimed": true, 00:15:02.435 "claim_type": "exclusive_write", 00:15:02.435 "zoned": false, 00:15:02.435 "supported_io_types": { 00:15:02.435 "read": true, 00:15:02.435 "write": true, 00:15:02.435 "unmap": true, 00:15:02.435 "write_zeroes": true, 00:15:02.435 "flush": true, 00:15:02.435 "reset": true, 00:15:02.435 "compare": false, 00:15:02.435 "compare_and_write": false, 00:15:02.435 "abort": true, 00:15:02.435 "nvme_admin": false, 00:15:02.435 "nvme_io": false 00:15:02.435 }, 00:15:02.435 "memory_domains": [ 00:15:02.435 { 00:15:02.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.435 "dma_device_type": 2 00:15:02.435 } 00:15:02.435 ], 00:15:02.435 "driver_specific": {} 00:15:02.435 } 00:15:02.435 ] 00:15:02.435 14:16:54 -- common/autotest_common.sh@905 -- # return 0 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.435 14:16:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.694 14:16:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.694 "name": "Existed_Raid", 00:15:02.694 "uuid": "ac68d37e-4eaf-4768-b136-7ea618d3ca02", 00:15:02.694 "strip_size_kb": 64, 00:15:02.694 "state": "configuring", 00:15:02.694 "raid_level": "concat", 00:15:02.694 "superblock": true, 00:15:02.694 "num_base_bdevs": 3, 00:15:02.694 "num_base_bdevs_discovered": 2, 00:15:02.694 "num_base_bdevs_operational": 3, 00:15:02.694 "base_bdevs_list": [ 00:15:02.694 { 00:15:02.694 "name": "BaseBdev1", 00:15:02.694 "uuid": "3b2fefc9-b094-44fd-8ef6-0f038702f0ac", 00:15:02.694 "is_configured": true, 00:15:02.694 "data_offset": 2048, 00:15:02.694 "data_size": 63488 00:15:02.694 }, 00:15:02.694 { 00:15:02.694 "name": "BaseBdev2", 00:15:02.694 "uuid": "6d7d2d4b-9f01-49ed-a2cd-595235fbf754", 00:15:02.694 "is_configured": true, 00:15:02.694 "data_offset": 2048, 00:15:02.694 "data_size": 63488 00:15:02.694 }, 00:15:02.694 { 00:15:02.694 "name": "BaseBdev3", 00:15:02.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.694 "is_configured": false, 00:15:02.694 "data_offset": 0, 00:15:02.694 "data_size": 0 
00:15:02.694 } 00:15:02.694 ] 00:15:02.694 }' 00:15:02.694 14:16:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.694 14:16:54 -- common/autotest_common.sh@10 -- # set +x 00:15:03.261 14:16:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.520 [2024-11-18 14:16:55.496746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.520 [2024-11-18 14:16:55.497155] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:03.520 [2024-11-18 14:16:55.497289] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:03.520 [2024-11-18 14:16:55.497469] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:03.520 BaseBdev3 00:15:03.520 [2024-11-18 14:16:55.497993] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:03.520 [2024-11-18 14:16:55.498014] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:03.520 [2024-11-18 14:16:55.498213] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.520 14:16:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:03.520 14:16:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:03.520 14:16:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.520 14:16:55 -- common/autotest_common.sh@899 -- # local i 00:15:03.520 14:16:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.520 14:16:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.520 14:16:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.779 14:16:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:04.038 [ 00:15:04.038 { 00:15:04.039 "name": "BaseBdev3", 00:15:04.039 "aliases": [ 00:15:04.039 "08a73f01-1223-48ec-9018-727195271030" 00:15:04.039 ], 00:15:04.039 "product_name": "Malloc disk", 00:15:04.039 "block_size": 512, 00:15:04.039 "num_blocks": 65536, 00:15:04.039 "uuid": "08a73f01-1223-48ec-9018-727195271030", 00:15:04.039 "assigned_rate_limits": { 00:15:04.039 "rw_ios_per_sec": 0, 00:15:04.039 "rw_mbytes_per_sec": 0, 00:15:04.039 "r_mbytes_per_sec": 0, 00:15:04.039 "w_mbytes_per_sec": 0 00:15:04.039 }, 00:15:04.039 "claimed": true, 00:15:04.039 "claim_type": "exclusive_write", 00:15:04.039 "zoned": false, 00:15:04.039 "supported_io_types": { 00:15:04.039 "read": true, 00:15:04.039 "write": true, 00:15:04.039 "unmap": true, 00:15:04.039 "write_zeroes": true, 00:15:04.039 "flush": true, 00:15:04.039 "reset": true, 00:15:04.039 "compare": false, 00:15:04.039 "compare_and_write": false, 00:15:04.039 "abort": true, 00:15:04.039 "nvme_admin": false, 00:15:04.039 "nvme_io": false 00:15:04.039 }, 00:15:04.039 "memory_domains": [ 00:15:04.039 { 00:15:04.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.039 "dma_device_type": 2 00:15:04.039 } 00:15:04.039 ], 00:15:04.039 "driver_specific": {} 00:15:04.039 } 00:15:04.039 ] 00:15:04.039 14:16:55 -- common/autotest_common.sh@905 -- # return 0 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:04.039 14:16:55 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.039 14:16:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:04.039 14:16:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.039 14:16:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.039 14:16:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.039 14:16:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.039 14:16:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.039 14:16:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.298 14:16:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.298 "name": "Existed_Raid", 00:15:04.298 "uuid": "ac68d37e-4eaf-4768-b136-7ea618d3ca02", 00:15:04.298 "strip_size_kb": 64, 00:15:04.298 "state": "online", 00:15:04.298 "raid_level": "concat", 00:15:04.298 "superblock": true, 00:15:04.298 "num_base_bdevs": 3, 00:15:04.298 "num_base_bdevs_discovered": 3, 00:15:04.298 "num_base_bdevs_operational": 3, 00:15:04.298 "base_bdevs_list": [ 00:15:04.298 { 00:15:04.298 "name": "BaseBdev1", 00:15:04.298 "uuid": "3b2fefc9-b094-44fd-8ef6-0f038702f0ac", 00:15:04.298 "is_configured": true, 00:15:04.298 "data_offset": 2048, 00:15:04.298 "data_size": 63488 00:15:04.298 }, 00:15:04.298 { 00:15:04.298 "name": "BaseBdev2", 00:15:04.298 "uuid": "6d7d2d4b-9f01-49ed-a2cd-595235fbf754", 00:15:04.298 "is_configured": true, 00:15:04.298 "data_offset": 2048, 00:15:04.298 "data_size": 63488 00:15:04.298 }, 00:15:04.298 { 00:15:04.298 "name": "BaseBdev3", 00:15:04.298 "uuid": "08a73f01-1223-48ec-9018-727195271030", 00:15:04.298 "is_configured": true, 00:15:04.298 "data_offset": 2048, 00:15:04.298 "data_size": 63488 00:15:04.298 } 00:15:04.298 ] 00:15:04.298 }' 00:15:04.298 14:16:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.298 14:16:56 -- common/autotest_common.sh@10 -- # set +x 00:15:04.865 14:16:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:05.124 [2024-11-18 14:16:57.045113] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.124 [2024-11-18 14:16:57.045280] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.124 [2024-11-18 14:16:57.045476] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:05.124 14:16:57 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.124 14:16:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.383 14:16:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.383 "name": "Existed_Raid", 00:15:05.383 "uuid": "ac68d37e-4eaf-4768-b136-7ea618d3ca02", 00:15:05.383 "strip_size_kb": 64, 00:15:05.383 "state": "offline", 00:15:05.383 "raid_level": "concat", 00:15:05.383 "superblock": true, 00:15:05.383 "num_base_bdevs": 3, 00:15:05.383 "num_base_bdevs_discovered": 2, 00:15:05.383 "num_base_bdevs_operational": 2, 00:15:05.383 "base_bdevs_list": [ 00:15:05.383 { 00:15:05.383 "name": null, 00:15:05.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.383 "is_configured": false, 00:15:05.383 "data_offset": 2048, 00:15:05.383 "data_size": 63488 00:15:05.383 }, 00:15:05.383 { 00:15:05.383 "name": "BaseBdev2", 00:15:05.383 "uuid": "6d7d2d4b-9f01-49ed-a2cd-595235fbf754", 00:15:05.383 "is_configured": true, 00:15:05.383 "data_offset": 2048, 00:15:05.383 "data_size": 63488 00:15:05.383 }, 00:15:05.383 { 00:15:05.383 "name": "BaseBdev3", 00:15:05.383 "uuid": "08a73f01-1223-48ec-9018-727195271030", 00:15:05.383 "is_configured": true, 00:15:05.383 "data_offset": 2048, 00:15:05.383 "data_size": 63488 00:15:05.383 } 00:15:05.383 ] 00:15:05.383 }' 00:15:05.383 14:16:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.383 14:16:57 -- common/autotest_common.sh@10 -- # set +x 00:15:05.951 14:16:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:05.951 14:16:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:05.951 14:16:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.951 14:16:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:06.210 14:16:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:06.210 14:16:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.210 14:16:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:06.469 [2024-11-18 14:16:58.362180] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.469 14:16:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:06.469 14:16:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:06.469 14:16:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.469 14:16:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:06.728 14:16:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:06.728 14:16:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.728 14:16:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:06.987 [2024-11-18 14:16:58.812042] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
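(Annotation: the verify_raid_bdev_state helper that keeps reappearing in this trace reduces to one RPC call plus a jq filter over its output. A minimal standalone version of the check being made here, using only calls shown in the log — the expected values mirror this offline/2-bdev assertion and are illustrative:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

  # concat carries no redundancy, so removing any base bdev drops the array to "offline".
  state=$(jq -r '.state' <<< "$info")
  found=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  [ "$state" = offline ] && [ "$found" -eq 2 ] || echo "unexpected state: $state/$found" >&2

End of annotation.)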
00:15:06.987 [2024-11-18 14:16:58.812261] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:06.987 14:16:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:06.987 14:16:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:06.987 14:16:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.987 14:16:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:06.987 14:16:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:06.987 14:16:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:06.987 14:16:59 -- bdev/bdev_raid.sh@287 -- # killprocess 126554 00:15:06.987 14:16:59 -- common/autotest_common.sh@936 -- # '[' -z 126554 ']' 00:15:06.987 14:16:59 -- common/autotest_common.sh@940 -- # kill -0 126554 00:15:06.987 14:16:59 -- common/autotest_common.sh@941 -- # uname 00:15:06.987 14:16:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.987 14:16:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126554 00:15:06.987 14:16:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:06.987 14:16:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:06.987 14:16:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126554' 00:15:06.987 killing process with pid 126554 00:15:06.987 14:16:59 -- common/autotest_common.sh@955 -- # kill 126554 00:15:06.987 14:16:59 -- common/autotest_common.sh@960 -- # wait 126554 00:15:06.987 [2024-11-18 14:16:59.059369] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.987 [2024-11-18 14:16:59.059496] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.555 ************************************ 00:15:07.555 END TEST raid_state_function_test_sb 00:15:07.555 ************************************ 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:07.555 00:15:07.555 real 0m11.177s 00:15:07.555 user 0m20.419s 00:15:07.555 sys 0m1.418s 00:15:07.555 14:16:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:07.555 14:16:59 -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:07.555 14:16:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:07.555 14:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.555 14:16:59 -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 ************************************ 00:15:07.555 START TEST raid_superblock_test 00:15:07.555 ************************************ 00:15:07.555 14:16:59 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 
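(Annotation: the raid_superblock_test starting here builds each base device as a passthru bdev stacked on a malloc disk, so the passthru layer can be torn down and re-registered while the RAID superblock written to the underlying disk survives. A condensed setup sketch using the commands that appear verbatim further down in this trace; the loop is an assumption for brevity — the test issues the calls one at a time:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for i in 1 2 3; do
      $RPC bdev_malloc_create 32 512 -b malloc$i
      # the fixed -u UUID gives each passthru bdev a stable identity across re-registration
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s

End of annotation.)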
00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=126927 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126927 /var/tmp/spdk-raid.sock 00:15:07.555 14:16:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:07.555 14:16:59 -- common/autotest_common.sh@829 -- # '[' -z 126927 ']' 00:15:07.555 14:16:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:07.555 14:16:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.555 14:16:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:07.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:07.555 14:16:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.555 14:16:59 -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 [2024-11-18 14:16:59.474823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:07.555 [2024-11-18 14:16:59.475200] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126927 ] 00:15:07.555 [2024-11-18 14:16:59.609686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.814 [2024-11-18 14:16:59.681799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.814 [2024-11-18 14:16:59.752349] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.379 14:17:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.379 14:17:00 -- common/autotest_common.sh@862 -- # return 0 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.379 14:17:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:08.637 malloc1 00:15:08.637 14:17:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.895 [2024-11-18 14:17:00.916800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.895 [2024-11-18 14:17:00.917055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:08.895 [2024-11-18 14:17:00.917245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:08.895 [2024-11-18 14:17:00.917427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.895 [2024-11-18 14:17:00.920032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.895 [2024-11-18 14:17:00.920230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.895 pt1 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.896 14:17:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:09.154 malloc2 00:15:09.154 14:17:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.413 [2024-11-18 14:17:01.302687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.413 [2024-11-18 14:17:01.302889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.413 [2024-11-18 14:17:01.302975] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:09.413 [2024-11-18 14:17:01.303274] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.413 [2024-11-18 14:17:01.305673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.414 [2024-11-18 14:17:01.305855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.414 pt2 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:09.414 14:17:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:09.672 malloc3 00:15:09.672 14:17:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.672 [2024-11-18 14:17:01.688487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.672 [2024-11-18 14:17:01.688684] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:09.672 [2024-11-18 14:17:01.688771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:09.672 [2024-11-18 14:17:01.688924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.672 [2024-11-18 14:17:01.691271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.672 [2024-11-18 14:17:01.691452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.672 pt3 00:15:09.672 14:17:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:09.672 14:17:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:09.672 14:17:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:09.931 [2024-11-18 14:17:01.864637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.931 [2024-11-18 14:17:01.866890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.931 [2024-11-18 14:17:01.867088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.931 [2024-11-18 14:17:01.867372] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:09.931 [2024-11-18 14:17:01.867514] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:09.931 [2024-11-18 14:17:01.867738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:09.931 [2024-11-18 14:17:01.868300] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:09.931 [2024-11-18 14:17:01.868429] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:15:09.931 [2024-11-18 14:17:01.868728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.931 14:17:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.190 14:17:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.190 "name": "raid_bdev1", 00:15:10.190 "uuid": "cc3a34aa-8638-4e62-beb3-8f85068af931", 00:15:10.190 "strip_size_kb": 64, 00:15:10.190 "state": "online", 00:15:10.190 "raid_level": "concat", 00:15:10.190 "superblock": true, 00:15:10.190 "num_base_bdevs": 3, 00:15:10.190 "num_base_bdevs_discovered": 3, 00:15:10.190 "num_base_bdevs_operational": 3, 00:15:10.190 "base_bdevs_list": [ 00:15:10.190 { 00:15:10.190 "name": "pt1", 00:15:10.190 "uuid": 
"735e3c6b-3836-558b-9748-290ab19146d4", 00:15:10.190 "is_configured": true, 00:15:10.190 "data_offset": 2048, 00:15:10.190 "data_size": 63488 00:15:10.190 }, 00:15:10.190 { 00:15:10.190 "name": "pt2", 00:15:10.190 "uuid": "79bcdf7d-31a0-5ebd-91c2-5b7d063ab17e", 00:15:10.190 "is_configured": true, 00:15:10.190 "data_offset": 2048, 00:15:10.190 "data_size": 63488 00:15:10.190 }, 00:15:10.190 { 00:15:10.190 "name": "pt3", 00:15:10.190 "uuid": "29ae1b77-bd89-5e43-9837-acb030618bc1", 00:15:10.190 "is_configured": true, 00:15:10.190 "data_offset": 2048, 00:15:10.190 "data_size": 63488 00:15:10.190 } 00:15:10.190 ] 00:15:10.190 }' 00:15:10.190 14:17:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.190 14:17:02 -- common/autotest_common.sh@10 -- # set +x 00:15:10.757 14:17:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:10.757 14:17:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:11.015 [2024-11-18 14:17:02.937067] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.016 14:17:02 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cc3a34aa-8638-4e62-beb3-8f85068af931 00:15:11.016 14:17:02 -- bdev/bdev_raid.sh@380 -- # '[' -z cc3a34aa-8638-4e62-beb3-8f85068af931 ']' 00:15:11.016 14:17:02 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:11.275 [2024-11-18 14:17:03.128877] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.275 [2024-11-18 14:17:03.129027] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.275 [2024-11-18 14:17:03.129263] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.275 [2024-11-18 14:17:03.129448] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.275 [2024-11-18 14:17:03.129564] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:15:11.275 14:17:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.275 14:17:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:11.534 14:17:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:11.534 14:17:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:11.534 14:17:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.534 14:17:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:11.794 14:17:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.794 14:17:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:11.794 14:17:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.794 14:17:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:12.052 14:17:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:12.052 14:17:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:12.312 14:17:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:12.312 14:17:04 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:12.312 14:17:04 -- common/autotest_common.sh@650 -- # local es=0 00:15:12.312 14:17:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:12.312 14:17:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.312 14:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.312 14:17:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.312 14:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.312 14:17:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.312 14:17:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.312 14:17:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.312 14:17:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:12.312 14:17:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:12.571 [2024-11-18 14:17:04.405099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:12.571 [2024-11-18 14:17:04.407538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:12.571 [2024-11-18 14:17:04.407762] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:12.571 [2024-11-18 14:17:04.407875] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:12.571 [2024-11-18 14:17:04.408148] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:12.571 [2024-11-18 14:17:04.408336] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:12.571 [2024-11-18 14:17:04.408509] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.571 [2024-11-18 14:17:04.408638] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:15:12.571 request: 00:15:12.571 { 00:15:12.571 "name": "raid_bdev1", 00:15:12.571 "raid_level": "concat", 00:15:12.571 "base_bdevs": [ 00:15:12.571 "malloc1", 00:15:12.571 "malloc2", 00:15:12.571 "malloc3" 00:15:12.571 ], 00:15:12.571 "superblock": false, 00:15:12.571 "strip_size_kb": 64, 00:15:12.571 "method": "bdev_raid_create", 00:15:12.571 "req_id": 1 00:15:12.571 } 00:15:12.571 Got JSON-RPC error response 00:15:12.571 response: 00:15:12.571 { 00:15:12.571 "code": -17, 00:15:12.571 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:12.571 } 00:15:12.571 14:17:04 -- common/autotest_common.sh@653 -- # es=1 00:15:12.571 14:17:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.571 14:17:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.571 14:17:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.571 14:17:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.571 14:17:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:12.571 14:17:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:12.571 14:17:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:12.571 14:17:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.830 [2024-11-18 14:17:04.785229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.830 [2024-11-18 14:17:04.785424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.830 [2024-11-18 14:17:04.785514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:12.830 [2024-11-18 14:17:04.785748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.830 [2024-11-18 14:17:04.788325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.830 [2024-11-18 14:17:04.788508] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.830 [2024-11-18 14:17:04.788728] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:12.830 [2024-11-18 14:17:04.788917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.830 pt1 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.830 14:17:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.088 14:17:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.088 "name": "raid_bdev1", 00:15:13.088 "uuid": "cc3a34aa-8638-4e62-beb3-8f85068af931", 00:15:13.088 "strip_size_kb": 64, 00:15:13.088 "state": "configuring", 00:15:13.088 "raid_level": "concat", 00:15:13.088 "superblock": true, 00:15:13.088 "num_base_bdevs": 3, 00:15:13.088 "num_base_bdevs_discovered": 1, 00:15:13.088 "num_base_bdevs_operational": 3, 00:15:13.088 "base_bdevs_list": [ 00:15:13.088 { 00:15:13.088 "name": "pt1", 00:15:13.088 "uuid": "735e3c6b-3836-558b-9748-290ab19146d4", 00:15:13.088 "is_configured": true, 00:15:13.088 "data_offset": 2048, 00:15:13.088 "data_size": 63488 00:15:13.088 }, 00:15:13.088 { 00:15:13.088 "name": null, 00:15:13.088 "uuid": "79bcdf7d-31a0-5ebd-91c2-5b7d063ab17e", 00:15:13.088 "is_configured": false, 00:15:13.088 "data_offset": 2048, 00:15:13.088 "data_size": 63488 00:15:13.088 }, 00:15:13.088 { 00:15:13.088 "name": null, 00:15:13.088 "uuid": "29ae1b77-bd89-5e43-9837-acb030618bc1", 00:15:13.088 "is_configured": false, 00:15:13.088 
"data_offset": 2048, 00:15:13.088 "data_size": 63488 00:15:13.088 } 00:15:13.088 ] 00:15:13.088 }' 00:15:13.088 14:17:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.088 14:17:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.655 14:17:05 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:13.655 14:17:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.915 [2024-11-18 14:17:05.905431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.915 [2024-11-18 14:17:05.905649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.915 [2024-11-18 14:17:05.905832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:13.915 [2024-11-18 14:17:05.906024] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.915 [2024-11-18 14:17:05.906545] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.915 [2024-11-18 14:17:05.906728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.915 [2024-11-18 14:17:05.906971] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:13.915 [2024-11-18 14:17:05.907111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.915 pt2 00:15:13.915 14:17:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:14.173 [2024-11-18 14:17:06.105479] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.173 14:17:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.174 14:17:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.432 14:17:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.432 "name": "raid_bdev1", 00:15:14.432 "uuid": "cc3a34aa-8638-4e62-beb3-8f85068af931", 00:15:14.432 "strip_size_kb": 64, 00:15:14.432 "state": "configuring", 00:15:14.432 "raid_level": "concat", 00:15:14.432 "superblock": true, 00:15:14.432 "num_base_bdevs": 3, 00:15:14.432 "num_base_bdevs_discovered": 1, 00:15:14.432 "num_base_bdevs_operational": 3, 00:15:14.432 "base_bdevs_list": [ 00:15:14.432 { 00:15:14.432 "name": "pt1", 00:15:14.432 "uuid": "735e3c6b-3836-558b-9748-290ab19146d4", 00:15:14.432 "is_configured": true, 00:15:14.432 "data_offset": 2048, 00:15:14.432 "data_size": 63488 00:15:14.432 }, 00:15:14.432 { 00:15:14.432 "name": null, 00:15:14.432 "uuid": 
"79bcdf7d-31a0-5ebd-91c2-5b7d063ab17e", 00:15:14.432 "is_configured": false, 00:15:14.432 "data_offset": 2048, 00:15:14.432 "data_size": 63488 00:15:14.432 }, 00:15:14.432 { 00:15:14.432 "name": null, 00:15:14.432 "uuid": "29ae1b77-bd89-5e43-9837-acb030618bc1", 00:15:14.432 "is_configured": false, 00:15:14.432 "data_offset": 2048, 00:15:14.432 "data_size": 63488 00:15:14.432 } 00:15:14.432 ] 00:15:14.432 }' 00:15:14.432 14:17:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.432 14:17:06 -- common/autotest_common.sh@10 -- # set +x 00:15:14.999 14:17:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:14.999 14:17:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:14.999 14:17:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.258 [2024-11-18 14:17:07.169650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.258 [2024-11-18 14:17:07.169890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.258 [2024-11-18 14:17:07.169974] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:15.258 [2024-11-18 14:17:07.170212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.258 [2024-11-18 14:17:07.170770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.258 [2024-11-18 14:17:07.170955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.258 [2024-11-18 14:17:07.171190] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:15.258 [2024-11-18 14:17:07.171333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.258 pt2 00:15:15.258 14:17:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:15.258 14:17:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:15.258 14:17:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:15.516 [2024-11-18 14:17:07.365677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:15.516 [2024-11-18 14:17:07.365900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.516 [2024-11-18 14:17:07.365977] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:15.516 [2024-11-18 14:17:07.366201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.516 [2024-11-18 14:17:07.366608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.516 [2024-11-18 14:17:07.366803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:15.516 [2024-11-18 14:17:07.367014] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:15.516 [2024-11-18 14:17:07.367170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:15.516 [2024-11-18 14:17:07.367414] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:15.516 [2024-11-18 14:17:07.367531] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:15.516 [2024-11-18 14:17:07.367660] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:15:15.516 [2024-11-18 14:17:07.368010] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:15.516 [2024-11-18 14:17:07.368125] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:15.516 [2024-11-18 14:17:07.368325] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.516 pt3 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.516 14:17:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.516 "name": "raid_bdev1", 00:15:15.516 "uuid": "cc3a34aa-8638-4e62-beb3-8f85068af931", 00:15:15.516 "strip_size_kb": 64, 00:15:15.516 "state": "online", 00:15:15.516 "raid_level": "concat", 00:15:15.516 "superblock": true, 00:15:15.516 "num_base_bdevs": 3, 00:15:15.517 "num_base_bdevs_discovered": 3, 00:15:15.517 "num_base_bdevs_operational": 3, 00:15:15.517 "base_bdevs_list": [ 00:15:15.517 { 00:15:15.517 "name": "pt1", 00:15:15.517 "uuid": "735e3c6b-3836-558b-9748-290ab19146d4", 00:15:15.517 "is_configured": true, 00:15:15.517 "data_offset": 2048, 00:15:15.517 "data_size": 63488 00:15:15.517 }, 00:15:15.517 { 00:15:15.517 "name": "pt2", 00:15:15.517 "uuid": "79bcdf7d-31a0-5ebd-91c2-5b7d063ab17e", 00:15:15.517 "is_configured": true, 00:15:15.517 "data_offset": 2048, 00:15:15.517 "data_size": 63488 00:15:15.517 }, 00:15:15.517 { 00:15:15.517 "name": "pt3", 00:15:15.517 "uuid": "29ae1b77-bd89-5e43-9837-acb030618bc1", 00:15:15.517 "is_configured": true, 00:15:15.517 "data_offset": 2048, 00:15:15.517 "data_size": 63488 00:15:15.517 } 00:15:15.517 ] 00:15:15.517 }' 00:15:15.517 14:17:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.517 14:17:07 -- common/autotest_common.sh@10 -- # set +x 00:15:16.452 14:17:08 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:16.452 14:17:08 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:16.452 [2024-11-18 14:17:08.482069] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.452 14:17:08 -- bdev/bdev_raid.sh@430 -- # '[' cc3a34aa-8638-4e62-beb3-8f85068af931 '!=' cc3a34aa-8638-4e62-beb3-8f85068af931 ']' 00:15:16.452 14:17:08 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:16.452 14:17:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:16.452 
14:17:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:16.452 14:17:08 -- bdev/bdev_raid.sh@511 -- # killprocess 126927 00:15:16.452 14:17:08 -- common/autotest_common.sh@936 -- # '[' -z 126927 ']' 00:15:16.452 14:17:08 -- common/autotest_common.sh@940 -- # kill -0 126927 00:15:16.452 14:17:08 -- common/autotest_common.sh@941 -- # uname 00:15:16.452 14:17:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.452 14:17:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126927 00:15:16.452 killing process with pid 126927 00:15:16.452 14:17:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:16.452 14:17:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:16.452 14:17:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126927' 00:15:16.452 14:17:08 -- common/autotest_common.sh@955 -- # kill 126927 00:15:16.452 14:17:08 -- common/autotest_common.sh@960 -- # wait 126927 00:15:16.452 [2024-11-18 14:17:08.521433] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.452 [2024-11-18 14:17:08.521504] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.452 [2024-11-18 14:17:08.521611] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.452 [2024-11-18 14:17:08.521667] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:16.711 [2024-11-18 14:17:08.560509] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.970 ************************************ 00:15:16.970 END TEST raid_superblock_test 00:15:16.970 ************************************ 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:16.970 00:15:16.970 real 0m9.434s 00:15:16.970 user 0m17.049s 00:15:16.970 sys 0m1.228s 00:15:16.970 14:17:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:16.970 14:17:08 -- common/autotest_common.sh@10 -- # set +x 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:15:16.970 14:17:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:16.970 14:17:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.970 14:17:08 -- common/autotest_common.sh@10 -- # set +x 00:15:16.970 ************************************ 00:15:16.970 START TEST raid_state_function_test 00:15:16.970 ************************************ 00:15:16.970 14:17:08 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=127228 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127228' 00:15:16.970 Process raid pid: 127228 00:15:16.970 14:17:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127228 /var/tmp/spdk-raid.sock 00:15:16.970 14:17:08 -- common/autotest_common.sh@829 -- # '[' -z 127228 ']' 00:15:16.970 14:17:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:16.970 14:17:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.970 14:17:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:16.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:16.970 14:17:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.970 14:17:08 -- common/autotest_common.sh@10 -- # set +x 00:15:16.970 [2024-11-18 14:17:08.976038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
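The trace above shows the harness bringing up a dedicated SPDK app for this test: bdev_svc is launched with its own RPC socket (-r /var/tmp/spdk-raid.sock) and the bdev_raid log flag, and waitforlisten blocks until that socket answers before any bdev_raid RPCs are issued. A minimal sketch of that start-and-wait pattern, assuming the repo layout used in this log (the polling loop is illustrative, not the actual autotest_common.sh implementation):

  # Launch the app under test with a private RPC socket, then poll a cheap
  # RPC until the server is accepting commands.
  rpc_sock=/var/tmp/spdk-raid.sock
  ./test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &
  svc_pid=$!
  for _ in $(seq 1 100); do
      # rpc_get_methods only succeeds once the RPC listener is up
      if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done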
00:15:16.970 [2024-11-18 14:17:08.976254] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.297 [2024-11-18 14:17:09.123479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.297 [2024-11-18 14:17:09.193517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.297 [2024-11-18 14:17:09.263844] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.878 14:17:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.878 14:17:09 -- common/autotest_common.sh@862 -- # return 0 00:15:17.878 14:17:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:18.137 [2024-11-18 14:17:10.074559] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.137 [2024-11-18 14:17:10.074655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.137 [2024-11-18 14:17:10.074688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:18.137 [2024-11-18 14:17:10.074739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:18.137 [2024-11-18 14:17:10.074759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:18.137 [2024-11-18 14:17:10.074836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.137 14:17:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.396 14:17:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.396 "name": "Existed_Raid", 00:15:18.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.396 "strip_size_kb": 0, 00:15:18.396 "state": "configuring", 00:15:18.396 "raid_level": "raid1", 00:15:18.396 "superblock": false, 00:15:18.396 "num_base_bdevs": 3, 00:15:18.396 "num_base_bdevs_discovered": 0, 00:15:18.396 "num_base_bdevs_operational": 3, 00:15:18.396 "base_bdevs_list": [ 00:15:18.396 { 00:15:18.396 "name": "BaseBdev1", 00:15:18.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.396 "is_configured": false, 00:15:18.396 "data_offset": 0, 00:15:18.396 "data_size": 0 00:15:18.396 }, 00:15:18.396 { 00:15:18.396 "name": "BaseBdev2", 00:15:18.396 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:18.396 "is_configured": false, 00:15:18.396 "data_offset": 0, 00:15:18.396 "data_size": 0 00:15:18.396 }, 00:15:18.396 { 00:15:18.396 "name": "BaseBdev3", 00:15:18.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.396 "is_configured": false, 00:15:18.396 "data_offset": 0, 00:15:18.396 "data_size": 0 00:15:18.396 } 00:15:18.396 ] 00:15:18.396 }' 00:15:18.396 14:17:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.396 14:17:10 -- common/autotest_common.sh@10 -- # set +x 00:15:18.964 14:17:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:19.223 [2024-11-18 14:17:11.074528] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:19.223 [2024-11-18 14:17:11.074561] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:19.223 14:17:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:19.223 [2024-11-18 14:17:11.262582] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.223 [2024-11-18 14:17:11.262637] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.223 [2024-11-18 14:17:11.262654] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.223 [2024-11-18 14:17:11.262684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.224 [2024-11-18 14:17:11.262697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:19.224 [2024-11-18 14:17:11.262732] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.224 14:17:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.483 [2024-11-18 14:17:11.521126] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.483 BaseBdev1 00:15:19.483 14:17:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:19.483 14:17:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:19.483 14:17:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:19.483 14:17:11 -- common/autotest_common.sh@899 -- # local i 00:15:19.483 14:17:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:19.483 14:17:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:19.483 14:17:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:19.742 14:17:11 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.001 [ 00:15:20.002 { 00:15:20.002 "name": "BaseBdev1", 00:15:20.002 "aliases": [ 00:15:20.002 "635cf125-3cc8-49fc-9816-eee70a67b8a8" 00:15:20.002 ], 00:15:20.002 "product_name": "Malloc disk", 00:15:20.002 "block_size": 512, 00:15:20.002 "num_blocks": 65536, 00:15:20.002 "uuid": "635cf125-3cc8-49fc-9816-eee70a67b8a8", 00:15:20.002 "assigned_rate_limits": { 00:15:20.002 "rw_ios_per_sec": 0, 00:15:20.002 "rw_mbytes_per_sec": 0, 00:15:20.002 "r_mbytes_per_sec": 0, 00:15:20.002 "w_mbytes_per_sec": 0 
00:15:20.002 }, 00:15:20.002 "claimed": true, 00:15:20.002 "claim_type": "exclusive_write", 00:15:20.002 "zoned": false, 00:15:20.002 "supported_io_types": { 00:15:20.002 "read": true, 00:15:20.002 "write": true, 00:15:20.002 "unmap": true, 00:15:20.002 "write_zeroes": true, 00:15:20.002 "flush": true, 00:15:20.002 "reset": true, 00:15:20.002 "compare": false, 00:15:20.002 "compare_and_write": false, 00:15:20.002 "abort": true, 00:15:20.002 "nvme_admin": false, 00:15:20.002 "nvme_io": false 00:15:20.002 }, 00:15:20.002 "memory_domains": [ 00:15:20.002 { 00:15:20.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.002 "dma_device_type": 2 00:15:20.002 } 00:15:20.002 ], 00:15:20.002 "driver_specific": {} 00:15:20.002 } 00:15:20.002 ] 00:15:20.002 14:17:11 -- common/autotest_common.sh@905 -- # return 0 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.002 14:17:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.003 14:17:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.003 14:17:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.003 14:17:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.262 14:17:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.262 "name": "Existed_Raid", 00:15:20.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.262 "strip_size_kb": 0, 00:15:20.262 "state": "configuring", 00:15:20.262 "raid_level": "raid1", 00:15:20.262 "superblock": false, 00:15:20.262 "num_base_bdevs": 3, 00:15:20.262 "num_base_bdevs_discovered": 1, 00:15:20.262 "num_base_bdevs_operational": 3, 00:15:20.262 "base_bdevs_list": [ 00:15:20.262 { 00:15:20.262 "name": "BaseBdev1", 00:15:20.262 "uuid": "635cf125-3cc8-49fc-9816-eee70a67b8a8", 00:15:20.262 "is_configured": true, 00:15:20.262 "data_offset": 0, 00:15:20.262 "data_size": 65536 00:15:20.262 }, 00:15:20.262 { 00:15:20.263 "name": "BaseBdev2", 00:15:20.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.263 "is_configured": false, 00:15:20.263 "data_offset": 0, 00:15:20.263 "data_size": 0 00:15:20.263 }, 00:15:20.263 { 00:15:20.263 "name": "BaseBdev3", 00:15:20.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.263 "is_configured": false, 00:15:20.263 "data_offset": 0, 00:15:20.263 "data_size": 0 00:15:20.263 } 00:15:20.263 ] 00:15:20.263 }' 00:15:20.263 14:17:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.263 14:17:12 -- common/autotest_common.sh@10 -- # set +x 00:15:20.830 14:17:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:21.089 [2024-11-18 14:17:12.933378] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.089 [2024-11-18 14:17:12.933425] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 
name Existed_Raid, state configuring 00:15:21.089 14:17:12 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:21.089 14:17:12 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:21.089 [2024-11-18 14:17:13.117471] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.089 [2024-11-18 14:17:13.119438] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.089 [2024-11-18 14:17:13.119499] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.089 [2024-11-18 14:17:13.119515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:21.089 [2024-11-18 14:17:13.119559] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.089 14:17:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.348 14:17:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.348 "name": "Existed_Raid", 00:15:21.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.348 "strip_size_kb": 0, 00:15:21.348 "state": "configuring", 00:15:21.348 "raid_level": "raid1", 00:15:21.348 "superblock": false, 00:15:21.348 "num_base_bdevs": 3, 00:15:21.348 "num_base_bdevs_discovered": 1, 00:15:21.348 "num_base_bdevs_operational": 3, 00:15:21.348 "base_bdevs_list": [ 00:15:21.348 { 00:15:21.348 "name": "BaseBdev1", 00:15:21.348 "uuid": "635cf125-3cc8-49fc-9816-eee70a67b8a8", 00:15:21.348 "is_configured": true, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 65536 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev2", 00:15:21.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.348 "is_configured": false, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 0 00:15:21.348 }, 00:15:21.348 { 00:15:21.348 "name": "BaseBdev3", 00:15:21.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.348 "is_configured": false, 00:15:21.348 "data_offset": 0, 00:15:21.348 "data_size": 0 00:15:21.348 } 00:15:21.348 ] 00:15:21.348 }' 00:15:21.348 14:17:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.348 14:17:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.914 14:17:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:22.173 [2024-11-18 14:17:14.126473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.173 BaseBdev2 00:15:22.173 14:17:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:22.173 14:17:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:22.173 14:17:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:22.173 14:17:14 -- common/autotest_common.sh@899 -- # local i 00:15:22.173 14:17:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:22.173 14:17:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:22.173 14:17:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.431 14:17:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:22.431 [ 00:15:22.431 { 00:15:22.431 "name": "BaseBdev2", 00:15:22.431 "aliases": [ 00:15:22.431 "309ab308-fc6c-4d79-8ee7-9169b2583e5d" 00:15:22.431 ], 00:15:22.431 "product_name": "Malloc disk", 00:15:22.431 "block_size": 512, 00:15:22.431 "num_blocks": 65536, 00:15:22.431 "uuid": "309ab308-fc6c-4d79-8ee7-9169b2583e5d", 00:15:22.431 "assigned_rate_limits": { 00:15:22.431 "rw_ios_per_sec": 0, 00:15:22.431 "rw_mbytes_per_sec": 0, 00:15:22.431 "r_mbytes_per_sec": 0, 00:15:22.431 "w_mbytes_per_sec": 0 00:15:22.431 }, 00:15:22.431 "claimed": true, 00:15:22.431 "claim_type": "exclusive_write", 00:15:22.431 "zoned": false, 00:15:22.431 "supported_io_types": { 00:15:22.431 "read": true, 00:15:22.431 "write": true, 00:15:22.431 "unmap": true, 00:15:22.431 "write_zeroes": true, 00:15:22.431 "flush": true, 00:15:22.431 "reset": true, 00:15:22.431 "compare": false, 00:15:22.431 "compare_and_write": false, 00:15:22.431 "abort": true, 00:15:22.431 "nvme_admin": false, 00:15:22.431 "nvme_io": false 00:15:22.431 }, 00:15:22.431 "memory_domains": [ 00:15:22.431 { 00:15:22.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.431 "dma_device_type": 2 00:15:22.431 } 00:15:22.431 ], 00:15:22.431 "driver_specific": {} 00:15:22.431 } 00:15:22.431 ] 00:15:22.690 14:17:14 -- common/autotest_common.sh@905 -- # return 0 00:15:22.690 14:17:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:22.690 14:17:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:22.690 14:17:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:22.690 14:17:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.690 14:17:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.690 14:17:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:15:22.691 "name": "Existed_Raid", 00:15:22.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.691 "strip_size_kb": 0, 00:15:22.691 "state": "configuring", 00:15:22.691 "raid_level": "raid1", 00:15:22.691 "superblock": false, 00:15:22.691 "num_base_bdevs": 3, 00:15:22.691 "num_base_bdevs_discovered": 2, 00:15:22.691 "num_base_bdevs_operational": 3, 00:15:22.691 "base_bdevs_list": [ 00:15:22.691 { 00:15:22.691 "name": "BaseBdev1", 00:15:22.691 "uuid": "635cf125-3cc8-49fc-9816-eee70a67b8a8", 00:15:22.691 "is_configured": true, 00:15:22.691 "data_offset": 0, 00:15:22.691 "data_size": 65536 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "name": "BaseBdev2", 00:15:22.691 "uuid": "309ab308-fc6c-4d79-8ee7-9169b2583e5d", 00:15:22.691 "is_configured": true, 00:15:22.691 "data_offset": 0, 00:15:22.691 "data_size": 65536 00:15:22.691 }, 00:15:22.691 { 00:15:22.691 "name": "BaseBdev3", 00:15:22.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.691 "is_configured": false, 00:15:22.691 "data_offset": 0, 00:15:22.691 "data_size": 0 00:15:22.691 } 00:15:22.691 ] 00:15:22.691 }' 00:15:22.691 14:17:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.691 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:15:23.258 14:17:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:23.518 [2024-11-18 14:17:15.479233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.518 [2024-11-18 14:17:15.479302] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:23.518 [2024-11-18 14:17:15.479313] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:23.518 [2024-11-18 14:17:15.479453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:23.518 [2024-11-18 14:17:15.479877] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:23.518 [2024-11-18 14:17:15.479902] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:23.518 [2024-11-18 14:17:15.480154] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.518 BaseBdev3 00:15:23.518 14:17:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:23.518 14:17:15 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:23.518 14:17:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:23.518 14:17:15 -- common/autotest_common.sh@899 -- # local i 00:15:23.518 14:17:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:23.518 14:17:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:23.518 14:17:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.777 14:17:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.035 [ 00:15:24.035 { 00:15:24.035 "name": "BaseBdev3", 00:15:24.035 "aliases": [ 00:15:24.035 "76cb9ff4-9048-42b9-8569-c0706be486d5" 00:15:24.035 ], 00:15:24.035 "product_name": "Malloc disk", 00:15:24.035 "block_size": 512, 00:15:24.035 "num_blocks": 65536, 00:15:24.035 "uuid": "76cb9ff4-9048-42b9-8569-c0706be486d5", 00:15:24.035 "assigned_rate_limits": { 00:15:24.035 "rw_ios_per_sec": 0, 00:15:24.035 "rw_mbytes_per_sec": 0, 
00:15:24.035 "r_mbytes_per_sec": 0, 00:15:24.035 "w_mbytes_per_sec": 0 00:15:24.035 }, 00:15:24.035 "claimed": true, 00:15:24.035 "claim_type": "exclusive_write", 00:15:24.035 "zoned": false, 00:15:24.035 "supported_io_types": { 00:15:24.035 "read": true, 00:15:24.035 "write": true, 00:15:24.035 "unmap": true, 00:15:24.035 "write_zeroes": true, 00:15:24.035 "flush": true, 00:15:24.035 "reset": true, 00:15:24.035 "compare": false, 00:15:24.035 "compare_and_write": false, 00:15:24.035 "abort": true, 00:15:24.035 "nvme_admin": false, 00:15:24.035 "nvme_io": false 00:15:24.035 }, 00:15:24.035 "memory_domains": [ 00:15:24.035 { 00:15:24.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.035 "dma_device_type": 2 00:15:24.035 } 00:15:24.035 ], 00:15:24.036 "driver_specific": {} 00:15:24.036 } 00:15:24.036 ] 00:15:24.036 14:17:15 -- common/autotest_common.sh@905 -- # return 0 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.036 14:17:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.036 14:17:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.036 "name": "Existed_Raid", 00:15:24.036 "uuid": "6cbb0733-7809-40a3-8ba0-b598de15e289", 00:15:24.036 "strip_size_kb": 0, 00:15:24.036 "state": "online", 00:15:24.036 "raid_level": "raid1", 00:15:24.036 "superblock": false, 00:15:24.036 "num_base_bdevs": 3, 00:15:24.036 "num_base_bdevs_discovered": 3, 00:15:24.036 "num_base_bdevs_operational": 3, 00:15:24.036 "base_bdevs_list": [ 00:15:24.036 { 00:15:24.036 "name": "BaseBdev1", 00:15:24.036 "uuid": "635cf125-3cc8-49fc-9816-eee70a67b8a8", 00:15:24.036 "is_configured": true, 00:15:24.036 "data_offset": 0, 00:15:24.036 "data_size": 65536 00:15:24.036 }, 00:15:24.036 { 00:15:24.036 "name": "BaseBdev2", 00:15:24.036 "uuid": "309ab308-fc6c-4d79-8ee7-9169b2583e5d", 00:15:24.036 "is_configured": true, 00:15:24.036 "data_offset": 0, 00:15:24.036 "data_size": 65536 00:15:24.036 }, 00:15:24.036 { 00:15:24.036 "name": "BaseBdev3", 00:15:24.036 "uuid": "76cb9ff4-9048-42b9-8569-c0706be486d5", 00:15:24.036 "is_configured": true, 00:15:24.036 "data_offset": 0, 00:15:24.036 "data_size": 65536 00:15:24.036 } 00:15:24.036 ] 00:15:24.036 }' 00:15:24.036 14:17:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.036 14:17:16 -- common/autotest_common.sh@10 -- # set +x 00:15:24.604 14:17:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:24.863 [2024-11-18 
14:17:16.843607] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.863 14:17:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.122 14:17:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.122 "name": "Existed_Raid", 00:15:25.122 "uuid": "6cbb0733-7809-40a3-8ba0-b598de15e289", 00:15:25.122 "strip_size_kb": 0, 00:15:25.122 "state": "online", 00:15:25.122 "raid_level": "raid1", 00:15:25.122 "superblock": false, 00:15:25.122 "num_base_bdevs": 3, 00:15:25.122 "num_base_bdevs_discovered": 2, 00:15:25.122 "num_base_bdevs_operational": 2, 00:15:25.122 "base_bdevs_list": [ 00:15:25.122 { 00:15:25.122 "name": null, 00:15:25.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.122 "is_configured": false, 00:15:25.122 "data_offset": 0, 00:15:25.122 "data_size": 65536 00:15:25.122 }, 00:15:25.122 { 00:15:25.122 "name": "BaseBdev2", 00:15:25.122 "uuid": "309ab308-fc6c-4d79-8ee7-9169b2583e5d", 00:15:25.122 "is_configured": true, 00:15:25.122 "data_offset": 0, 00:15:25.122 "data_size": 65536 00:15:25.122 }, 00:15:25.122 { 00:15:25.122 "name": "BaseBdev3", 00:15:25.122 "uuid": "76cb9ff4-9048-42b9-8569-c0706be486d5", 00:15:25.122 "is_configured": true, 00:15:25.122 "data_offset": 0, 00:15:25.122 "data_size": 65536 00:15:25.122 } 00:15:25.122 ] 00:15:25.122 }' 00:15:25.122 14:17:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.122 14:17:17 -- common/autotest_common.sh@10 -- # set +x 00:15:25.689 14:17:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:25.689 14:17:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:25.689 14:17:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.689 14:17:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:25.948 14:17:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:25.948 14:17:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:25.948 14:17:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:26.210 [2024-11-18 14:17:18.197157] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
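Note the difference from the concat run earlier: there has_redundancy returned 1 and the test went straight to teardown, while here has_redundancy raid1 returns 0, so deleting base bdevs one at a time is expected to leave the array online in a degraded state (num_base_bdevs_discovered drops to 2 while the state stays online). Judging only from the two traces (case $1 in ... followed by return 0 or return 1), the helper plausibly looks like the sketch below; the exact body is an assumption reconstructed from the log:

  # Reconstructed sketch of has_redundancy: levels that survive a missing
  # member return 0, striping-only levels return 1.
  has_redundancy() {
      case $1 in
      raid1)
          return 0
          ;;
      *)
          return 1
          ;;
      esac
  }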
00:15:26.210 14:17:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.210 14:17:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.210 14:17:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.210 14:17:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:26.469 14:17:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:26.469 14:17:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.469 14:17:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:26.727 [2024-11-18 14:17:18.634977] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.727 [2024-11-18 14:17:18.635006] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.727 [2024-11-18 14:17:18.635078] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.727 [2024-11-18 14:17:18.644382] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.727 [2024-11-18 14:17:18.644406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:26.727 14:17:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.727 14:17:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.727 14:17:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.727 14:17:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.986 14:17:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:26.986 14:17:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:26.986 14:17:18 -- bdev/bdev_raid.sh@287 -- # killprocess 127228 00:15:26.986 14:17:18 -- common/autotest_common.sh@936 -- # '[' -z 127228 ']' 00:15:26.986 14:17:18 -- common/autotest_common.sh@940 -- # kill -0 127228 00:15:26.986 14:17:18 -- common/autotest_common.sh@941 -- # uname 00:15:26.986 14:17:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.986 14:17:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127228 00:15:26.986 14:17:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:26.986 14:17:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:26.986 14:17:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127228' 00:15:26.986 killing process with pid 127228 00:15:26.986 14:17:18 -- common/autotest_common.sh@955 -- # kill 127228 00:15:26.986 [2024-11-18 14:17:18.924489] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.986 [2024-11-18 14:17:18.924574] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.986 14:17:18 -- common/autotest_common.sh@960 -- # wait 127228 00:15:27.245 14:17:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:27.245 00:15:27.245 real 0m10.226s 00:15:27.245 user 0m18.725s 00:15:27.245 sys 0m1.346s 00:15:27.245 14:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:27.245 14:17:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.245 ************************************ 00:15:27.245 END TEST raid_state_function_test 00:15:27.245 ************************************ 00:15:27.245 14:17:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
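Every verify_raid_bdev_state call in the test that just finished (and in the _sb variant that starts next) follows the same query pattern: dump all raid bdevs over the test socket, select one entry by name with jq, then compare individual fields against the expected values. A hedged sketch of that step, with illustrative field checks rather than the full function:

  # Fetch one raid bdev's JSON from the RPC server and assert on its fields.
  sock=/var/tmp/spdk-raid.sock
  info=$(scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r '.state' <<<"$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
  [[ $state == online && $discovered -eq 2 ]] || exit 1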
00:15:27.245 14:17:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:27.245 14:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.245 14:17:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.245 ************************************ 00:15:27.246 START TEST raid_state_function_test_sb 00:15:27.246 ************************************ 00:15:27.246 14:17:19 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=127585 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:27.246 Process raid pid: 127585 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127585' 00:15:27.246 14:17:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127585 /var/tmp/spdk-raid.sock 00:15:27.246 14:17:19 -- common/autotest_common.sh@829 -- # '[' -z 127585 ']' 00:15:27.246 14:17:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:27.246 14:17:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:27.246 14:17:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:27.246 14:17:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.246 14:17:19 -- common/autotest_common.sh@10 -- # set +x 00:15:27.246 [2024-11-18 14:17:19.250834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
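The _sb variant differs from the previous run only in superblock=true, which makes the helper pass -s to every create call below, e.g. (socket and paths as used in this log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

With -s each 65536-block malloc member reserves room for the on-disk raid superblock, which is why the JSON dumps further down report data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen in the non-superblock run.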
00:15:27.246 [2024-11-18 14:17:19.250998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.504 [2024-11-18 14:17:19.395510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.504 [2024-11-18 14:17:19.478177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.504 [2024-11-18 14:17:19.558497] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.439 14:17:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.439 14:17:20 -- common/autotest_common.sh@862 -- # return 0 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:28.439 [2024-11-18 14:17:20.377348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.439 [2024-11-18 14:17:20.377552] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.439 [2024-11-18 14:17:20.377658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.439 [2024-11-18 14:17:20.377718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.439 [2024-11-18 14:17:20.377809] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:28.439 [2024-11-18 14:17:20.377911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.439 14:17:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.698 14:17:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.698 "name": "Existed_Raid", 00:15:28.698 "uuid": "9392bbb3-1844-487a-8d56-da676402bf9e", 00:15:28.698 "strip_size_kb": 0, 00:15:28.698 "state": "configuring", 00:15:28.698 "raid_level": "raid1", 00:15:28.698 "superblock": true, 00:15:28.698 "num_base_bdevs": 3, 00:15:28.698 "num_base_bdevs_discovered": 0, 00:15:28.698 "num_base_bdevs_operational": 3, 00:15:28.698 "base_bdevs_list": [ 00:15:28.698 { 00:15:28.698 "name": "BaseBdev1", 00:15:28.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.698 "is_configured": false, 00:15:28.698 "data_offset": 0, 00:15:28.698 "data_size": 0 00:15:28.698 }, 00:15:28.698 { 00:15:28.698 "name": "BaseBdev2", 00:15:28.698 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:28.698 "is_configured": false, 00:15:28.698 "data_offset": 0, 00:15:28.698 "data_size": 0 00:15:28.698 }, 00:15:28.698 { 00:15:28.698 "name": "BaseBdev3", 00:15:28.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.698 "is_configured": false, 00:15:28.698 "data_offset": 0, 00:15:28.698 "data_size": 0 00:15:28.698 } 00:15:28.698 ] 00:15:28.698 }' 00:15:28.698 14:17:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.698 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:15:29.265 14:17:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.524 [2024-11-18 14:17:21.377436] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.524 [2024-11-18 14:17:21.377582] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:29.524 14:17:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.783 [2024-11-18 14:17:21.633529] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.783 [2024-11-18 14:17:21.633721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.784 [2024-11-18 14:17:21.633822] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.784 [2024-11-18 14:17:21.633984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.784 [2024-11-18 14:17:21.634096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.784 [2024-11-18 14:17:21.634299] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.784 14:17:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.784 [2024-11-18 14:17:21.831888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.784 BaseBdev1 00:15:29.784 14:17:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:29.784 14:17:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:29.784 14:17:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:29.784 14:17:21 -- common/autotest_common.sh@899 -- # local i 00:15:29.784 14:17:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:29.784 14:17:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:29.784 14:17:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.042 14:17:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.301 [ 00:15:30.301 { 00:15:30.301 "name": "BaseBdev1", 00:15:30.301 "aliases": [ 00:15:30.301 "bcff6234-27f7-46a7-a846-219701f8438e" 00:15:30.301 ], 00:15:30.301 "product_name": "Malloc disk", 00:15:30.301 "block_size": 512, 00:15:30.301 "num_blocks": 65536, 00:15:30.301 "uuid": "bcff6234-27f7-46a7-a846-219701f8438e", 00:15:30.301 "assigned_rate_limits": { 00:15:30.301 "rw_ios_per_sec": 0, 00:15:30.301 "rw_mbytes_per_sec": 0, 00:15:30.301 "r_mbytes_per_sec": 0, 00:15:30.301 "w_mbytes_per_sec": 0 
00:15:30.301 }, 00:15:30.301 "claimed": true, 00:15:30.301 "claim_type": "exclusive_write", 00:15:30.301 "zoned": false, 00:15:30.301 "supported_io_types": { 00:15:30.301 "read": true, 00:15:30.301 "write": true, 00:15:30.301 "unmap": true, 00:15:30.301 "write_zeroes": true, 00:15:30.301 "flush": true, 00:15:30.301 "reset": true, 00:15:30.301 "compare": false, 00:15:30.301 "compare_and_write": false, 00:15:30.301 "abort": true, 00:15:30.301 "nvme_admin": false, 00:15:30.301 "nvme_io": false 00:15:30.301 }, 00:15:30.301 "memory_domains": [ 00:15:30.301 { 00:15:30.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.301 "dma_device_type": 2 00:15:30.301 } 00:15:30.301 ], 00:15:30.301 "driver_specific": {} 00:15:30.301 } 00:15:30.301 ] 00:15:30.301 14:17:22 -- common/autotest_common.sh@905 -- # return 0 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.301 14:17:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.559 14:17:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.559 "name": "Existed_Raid", 00:15:30.559 "uuid": "12af98c2-7de9-4405-b074-e10ef05d030d", 00:15:30.559 "strip_size_kb": 0, 00:15:30.559 "state": "configuring", 00:15:30.559 "raid_level": "raid1", 00:15:30.559 "superblock": true, 00:15:30.559 "num_base_bdevs": 3, 00:15:30.559 "num_base_bdevs_discovered": 1, 00:15:30.559 "num_base_bdevs_operational": 3, 00:15:30.559 "base_bdevs_list": [ 00:15:30.559 { 00:15:30.559 "name": "BaseBdev1", 00:15:30.559 "uuid": "bcff6234-27f7-46a7-a846-219701f8438e", 00:15:30.559 "is_configured": true, 00:15:30.559 "data_offset": 2048, 00:15:30.559 "data_size": 63488 00:15:30.559 }, 00:15:30.559 { 00:15:30.559 "name": "BaseBdev2", 00:15:30.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.559 "is_configured": false, 00:15:30.559 "data_offset": 0, 00:15:30.560 "data_size": 0 00:15:30.560 }, 00:15:30.560 { 00:15:30.560 "name": "BaseBdev3", 00:15:30.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.560 "is_configured": false, 00:15:30.560 "data_offset": 0, 00:15:30.560 "data_size": 0 00:15:30.560 } 00:15:30.560 ] 00:15:30.560 }' 00:15:30.560 14:17:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.560 14:17:22 -- common/autotest_common.sh@10 -- # set +x 00:15:31.127 14:17:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.387 [2024-11-18 14:17:23.304160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.387 [2024-11-18 14:17:23.304318] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:15:31.387 14:17:23 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:31.387 14:17:23 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:31.646 14:17:23 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.906 BaseBdev1 00:15:31.906 14:17:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:31.906 14:17:23 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:31.906 14:17:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:31.906 14:17:23 -- common/autotest_common.sh@899 -- # local i 00:15:31.906 14:17:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:31.906 14:17:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:31.906 14:17:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.906 14:17:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.164 [ 00:15:32.164 { 00:15:32.164 "name": "BaseBdev1", 00:15:32.164 "aliases": [ 00:15:32.164 "225c2271-1b7d-452e-adda-9c6bcd514a3c" 00:15:32.164 ], 00:15:32.164 "product_name": "Malloc disk", 00:15:32.164 "block_size": 512, 00:15:32.164 "num_blocks": 65536, 00:15:32.164 "uuid": "225c2271-1b7d-452e-adda-9c6bcd514a3c", 00:15:32.164 "assigned_rate_limits": { 00:15:32.164 "rw_ios_per_sec": 0, 00:15:32.164 "rw_mbytes_per_sec": 0, 00:15:32.164 "r_mbytes_per_sec": 0, 00:15:32.164 "w_mbytes_per_sec": 0 00:15:32.164 }, 00:15:32.164 "claimed": false, 00:15:32.164 "zoned": false, 00:15:32.164 "supported_io_types": { 00:15:32.164 "read": true, 00:15:32.164 "write": true, 00:15:32.164 "unmap": true, 00:15:32.165 "write_zeroes": true, 00:15:32.165 "flush": true, 00:15:32.165 "reset": true, 00:15:32.165 "compare": false, 00:15:32.165 "compare_and_write": false, 00:15:32.165 "abort": true, 00:15:32.165 "nvme_admin": false, 00:15:32.165 "nvme_io": false 00:15:32.165 }, 00:15:32.165 "memory_domains": [ 00:15:32.165 { 00:15:32.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.165 "dma_device_type": 2 00:15:32.165 } 00:15:32.165 ], 00:15:32.165 "driver_specific": {} 00:15:32.165 } 00:15:32.165 ] 00:15:32.165 14:17:24 -- common/autotest_common.sh@905 -- # return 0 00:15:32.165 14:17:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:32.424 [2024-11-18 14:17:24.387387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.424 [2024-11-18 14:17:24.389909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.424 [2024-11-18 14:17:24.390077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.424 [2024-11-18 14:17:24.390179] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.424 [2024-11-18 14:17:24.390246] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.424 14:17:24 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.424 14:17:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.683 14:17:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.683 "name": "Existed_Raid", 00:15:32.683 "uuid": "6b31cfa5-5ea4-439e-9a14-55856d513732", 00:15:32.683 "strip_size_kb": 0, 00:15:32.683 "state": "configuring", 00:15:32.683 "raid_level": "raid1", 00:15:32.683 "superblock": true, 00:15:32.683 "num_base_bdevs": 3, 00:15:32.683 "num_base_bdevs_discovered": 1, 00:15:32.683 "num_base_bdevs_operational": 3, 00:15:32.683 "base_bdevs_list": [ 00:15:32.683 { 00:15:32.683 "name": "BaseBdev1", 00:15:32.683 "uuid": "225c2271-1b7d-452e-adda-9c6bcd514a3c", 00:15:32.683 "is_configured": true, 00:15:32.683 "data_offset": 2048, 00:15:32.683 "data_size": 63488 00:15:32.683 }, 00:15:32.683 { 00:15:32.683 "name": "BaseBdev2", 00:15:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.683 "is_configured": false, 00:15:32.683 "data_offset": 0, 00:15:32.683 "data_size": 0 00:15:32.683 }, 00:15:32.683 { 00:15:32.683 "name": "BaseBdev3", 00:15:32.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.683 "is_configured": false, 00:15:32.683 "data_offset": 0, 00:15:32.683 "data_size": 0 00:15:32.683 } 00:15:32.683 ] 00:15:32.683 }' 00:15:32.683 14:17:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.683 14:17:24 -- common/autotest_common.sh@10 -- # set +x 00:15:33.249 14:17:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.509 [2024-11-18 14:17:25.428346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.509 BaseBdev2 00:15:33.509 14:17:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:33.509 14:17:25 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:33.509 14:17:25 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.509 14:17:25 -- common/autotest_common.sh@899 -- # local i 00:15:33.509 14:17:25 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.509 14:17:25 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.509 14:17:25 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.768 14:17:25 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.027 [ 00:15:34.027 { 00:15:34.027 "name": "BaseBdev2", 00:15:34.027 "aliases": [ 00:15:34.027 
"10318c09-9e59-44a8-9af9-a9ec566589b7" 00:15:34.027 ], 00:15:34.027 "product_name": "Malloc disk", 00:15:34.027 "block_size": 512, 00:15:34.027 "num_blocks": 65536, 00:15:34.027 "uuid": "10318c09-9e59-44a8-9af9-a9ec566589b7", 00:15:34.027 "assigned_rate_limits": { 00:15:34.027 "rw_ios_per_sec": 0, 00:15:34.027 "rw_mbytes_per_sec": 0, 00:15:34.027 "r_mbytes_per_sec": 0, 00:15:34.027 "w_mbytes_per_sec": 0 00:15:34.027 }, 00:15:34.027 "claimed": true, 00:15:34.027 "claim_type": "exclusive_write", 00:15:34.027 "zoned": false, 00:15:34.027 "supported_io_types": { 00:15:34.027 "read": true, 00:15:34.027 "write": true, 00:15:34.027 "unmap": true, 00:15:34.027 "write_zeroes": true, 00:15:34.027 "flush": true, 00:15:34.027 "reset": true, 00:15:34.027 "compare": false, 00:15:34.027 "compare_and_write": false, 00:15:34.027 "abort": true, 00:15:34.027 "nvme_admin": false, 00:15:34.027 "nvme_io": false 00:15:34.027 }, 00:15:34.027 "memory_domains": [ 00:15:34.027 { 00:15:34.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.027 "dma_device_type": 2 00:15:34.027 } 00:15:34.027 ], 00:15:34.027 "driver_specific": {} 00:15:34.027 } 00:15:34.027 ] 00:15:34.027 14:17:25 -- common/autotest_common.sh@905 -- # return 0 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.027 14:17:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.286 14:17:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.286 "name": "Existed_Raid", 00:15:34.286 "uuid": "6b31cfa5-5ea4-439e-9a14-55856d513732", 00:15:34.286 "strip_size_kb": 0, 00:15:34.286 "state": "configuring", 00:15:34.286 "raid_level": "raid1", 00:15:34.286 "superblock": true, 00:15:34.286 "num_base_bdevs": 3, 00:15:34.286 "num_base_bdevs_discovered": 2, 00:15:34.286 "num_base_bdevs_operational": 3, 00:15:34.286 "base_bdevs_list": [ 00:15:34.286 { 00:15:34.286 "name": "BaseBdev1", 00:15:34.286 "uuid": "225c2271-1b7d-452e-adda-9c6bcd514a3c", 00:15:34.286 "is_configured": true, 00:15:34.286 "data_offset": 2048, 00:15:34.286 "data_size": 63488 00:15:34.286 }, 00:15:34.286 { 00:15:34.286 "name": "BaseBdev2", 00:15:34.286 "uuid": "10318c09-9e59-44a8-9af9-a9ec566589b7", 00:15:34.286 "is_configured": true, 00:15:34.286 "data_offset": 2048, 00:15:34.286 "data_size": 63488 00:15:34.286 }, 00:15:34.286 { 00:15:34.286 "name": "BaseBdev3", 00:15:34.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.286 "is_configured": false, 00:15:34.286 "data_offset": 0, 00:15:34.286 "data_size": 0 00:15:34.286 } 
00:15:34.286 ] 00:15:34.286 }' 00:15:34.286 14:17:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.286 14:17:26 -- common/autotest_common.sh@10 -- # set +x 00:15:34.853 14:17:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:34.853 [2024-11-18 14:17:26.876273] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.853 [2024-11-18 14:17:26.876489] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:34.853 [2024-11-18 14:17:26.876503] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:34.853 [2024-11-18 14:17:26.876670] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:34.853 [2024-11-18 14:17:26.877061] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:34.853 [2024-11-18 14:17:26.877082] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:34.853 [2024-11-18 14:17:26.877221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.853 BaseBdev3 00:15:34.853 14:17:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:34.853 14:17:26 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:34.853 14:17:26 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:34.853 14:17:26 -- common/autotest_common.sh@899 -- # local i 00:15:34.853 14:17:26 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:34.853 14:17:26 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:34.853 14:17:26 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.112 14:17:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.371 [ 00:15:35.371 { 00:15:35.371 "name": "BaseBdev3", 00:15:35.371 "aliases": [ 00:15:35.371 "edb43572-6b7e-4185-b172-a6cb94698f1c" 00:15:35.371 ], 00:15:35.371 "product_name": "Malloc disk", 00:15:35.371 "block_size": 512, 00:15:35.371 "num_blocks": 65536, 00:15:35.371 "uuid": "edb43572-6b7e-4185-b172-a6cb94698f1c", 00:15:35.371 "assigned_rate_limits": { 00:15:35.371 "rw_ios_per_sec": 0, 00:15:35.371 "rw_mbytes_per_sec": 0, 00:15:35.371 "r_mbytes_per_sec": 0, 00:15:35.371 "w_mbytes_per_sec": 0 00:15:35.371 }, 00:15:35.371 "claimed": true, 00:15:35.371 "claim_type": "exclusive_write", 00:15:35.371 "zoned": false, 00:15:35.371 "supported_io_types": { 00:15:35.371 "read": true, 00:15:35.371 "write": true, 00:15:35.371 "unmap": true, 00:15:35.371 "write_zeroes": true, 00:15:35.371 "flush": true, 00:15:35.371 "reset": true, 00:15:35.371 "compare": false, 00:15:35.371 "compare_and_write": false, 00:15:35.371 "abort": true, 00:15:35.371 "nvme_admin": false, 00:15:35.371 "nvme_io": false 00:15:35.371 }, 00:15:35.371 "memory_domains": [ 00:15:35.371 { 00:15:35.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.371 "dma_device_type": 2 00:15:35.371 } 00:15:35.371 ], 00:15:35.371 "driver_specific": {} 00:15:35.371 } 00:15:35.371 ] 00:15:35.371 14:17:27 -- common/autotest_common.sh@905 -- # return 0 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.371 14:17:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.630 14:17:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.630 "name": "Existed_Raid", 00:15:35.630 "uuid": "6b31cfa5-5ea4-439e-9a14-55856d513732", 00:15:35.630 "strip_size_kb": 0, 00:15:35.630 "state": "online", 00:15:35.630 "raid_level": "raid1", 00:15:35.630 "superblock": true, 00:15:35.630 "num_base_bdevs": 3, 00:15:35.630 "num_base_bdevs_discovered": 3, 00:15:35.630 "num_base_bdevs_operational": 3, 00:15:35.630 "base_bdevs_list": [ 00:15:35.630 { 00:15:35.630 "name": "BaseBdev1", 00:15:35.630 "uuid": "225c2271-1b7d-452e-adda-9c6bcd514a3c", 00:15:35.630 "is_configured": true, 00:15:35.630 "data_offset": 2048, 00:15:35.630 "data_size": 63488 00:15:35.630 }, 00:15:35.630 { 00:15:35.630 "name": "BaseBdev2", 00:15:35.630 "uuid": "10318c09-9e59-44a8-9af9-a9ec566589b7", 00:15:35.630 "is_configured": true, 00:15:35.630 "data_offset": 2048, 00:15:35.630 "data_size": 63488 00:15:35.630 }, 00:15:35.630 { 00:15:35.630 "name": "BaseBdev3", 00:15:35.630 "uuid": "edb43572-6b7e-4185-b172-a6cb94698f1c", 00:15:35.630 "is_configured": true, 00:15:35.630 "data_offset": 2048, 00:15:35.630 "data_size": 63488 00:15:35.630 } 00:15:35.630 ] 00:15:35.630 }' 00:15:35.630 14:17:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.630 14:17:27 -- common/autotest_common.sh@10 -- # set +x 00:15:36.196 14:17:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:36.455 [2024-11-18 14:17:28.422943] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
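The verify_raid_bdev_state call being traced in these lines is the test's single source of truth for array state: it re-reads the raid bdev over the RPC socket, picks the entry out by name with jq, and compares state, raid_level, strip_size and the base-bdev counts against the arguments it was given. A minimal manual replay of the same check (a sketch only: the rpc.py path and socket are the ones used in this run, but the combined jq filter is illustrative, not the helper's exact code):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.raid_level) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # expected at this point, after BaseBdev1 was removed from the raid1 array: online raid1 2/2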
00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.455 14:17:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.713 14:17:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.713 "name": "Existed_Raid", 00:15:36.713 "uuid": "6b31cfa5-5ea4-439e-9a14-55856d513732", 00:15:36.713 "strip_size_kb": 0, 00:15:36.713 "state": "online", 00:15:36.713 "raid_level": "raid1", 00:15:36.713 "superblock": true, 00:15:36.713 "num_base_bdevs": 3, 00:15:36.713 "num_base_bdevs_discovered": 2, 00:15:36.713 "num_base_bdevs_operational": 2, 00:15:36.713 "base_bdevs_list": [ 00:15:36.713 { 00:15:36.713 "name": null, 00:15:36.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.713 "is_configured": false, 00:15:36.713 "data_offset": 2048, 00:15:36.713 "data_size": 63488 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "name": "BaseBdev2", 00:15:36.713 "uuid": "10318c09-9e59-44a8-9af9-a9ec566589b7", 00:15:36.713 "is_configured": true, 00:15:36.713 "data_offset": 2048, 00:15:36.713 "data_size": 63488 00:15:36.713 }, 00:15:36.713 { 00:15:36.713 "name": "BaseBdev3", 00:15:36.713 "uuid": "edb43572-6b7e-4185-b172-a6cb94698f1c", 00:15:36.713 "is_configured": true, 00:15:36.713 "data_offset": 2048, 00:15:36.713 "data_size": 63488 00:15:36.713 } 00:15:36.713 ] 00:15:36.713 }' 00:15:36.713 14:17:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.713 14:17:28 -- common/autotest_common.sh@10 -- # set +x 00:15:37.284 14:17:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:37.284 14:17:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:37.285 14:17:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.552 14:17:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:37.552 14:17:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:37.552 14:17:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.552 14:17:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.810 [2024-11-18 14:17:29.775258] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.810 14:17:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:37.810 14:17:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:37.810 14:17:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.810 14:17:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:38.068 14:17:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:38.068 14:17:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.068 14:17:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:38.326 [2024-11-18 14:17:30.232023] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.326 [2024-11-18 14:17:30.232053] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.327 [2024-11-18 14:17:30.232121] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.327 [2024-11-18 14:17:30.244303] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.327 [2024-11-18 14:17:30.244330] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:38.327 14:17:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:38.327 14:17:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:38.327 14:17:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.327 14:17:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.585 14:17:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:38.585 14:17:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:38.585 14:17:30 -- bdev/bdev_raid.sh@287 -- # killprocess 127585 00:15:38.585 14:17:30 -- common/autotest_common.sh@936 -- # '[' -z 127585 ']' 00:15:38.585 14:17:30 -- common/autotest_common.sh@940 -- # kill -0 127585 00:15:38.585 14:17:30 -- common/autotest_common.sh@941 -- # uname 00:15:38.585 14:17:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.585 14:17:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127585 00:15:38.585 killing process with pid 127585 00:15:38.585 14:17:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.585 14:17:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.585 14:17:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127585' 00:15:38.585 14:17:30 -- common/autotest_common.sh@955 -- # kill 127585 00:15:38.585 14:17:30 -- common/autotest_common.sh@960 -- # wait 127585 00:15:38.585 [2024-11-18 14:17:30.511103] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:38.585 [2024-11-18 14:17:30.511202] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.844 ************************************ 00:15:38.844 END TEST raid_state_function_test_sb 00:15:38.844 ************************************ 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:38.844 00:15:38.844 real 0m11.599s 00:15:38.844 user 0m21.188s 00:15:38.844 sys 0m1.496s 00:15:38.844 14:17:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:38.844 14:17:30 -- common/autotest_common.sh@10 -- # set +x 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:15:38.844 14:17:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:38.844 14:17:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.844 14:17:30 -- common/autotest_common.sh@10 -- # set +x 00:15:38.844 ************************************ 00:15:38.844 START TEST raid_superblock_test 00:15:38.844 ************************************ 00:15:38.844 14:17:30 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@357 -- # raid_pid=127965 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127965 /var/tmp/spdk-raid.sock 00:15:38.844 14:17:30 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:38.844 14:17:30 -- common/autotest_common.sh@829 -- # '[' -z 127965 ']' 00:15:38.844 14:17:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.844 14:17:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:38.844 14:17:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.844 14:17:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.844 14:17:30 -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 [2024-11-18 14:17:30.910823] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:38.845 [2024-11-18 14:17:30.911013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127965 ] 00:15:39.103 [2024-11-18 14:17:31.052465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.103 [2024-11-18 14:17:31.117234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.362 [2024-11-18 14:17:31.179569] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.931 14:17:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.931 14:17:31 -- common/autotest_common.sh@862 -- # return 0 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:39.931 14:17:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:40.190 malloc1 00:15:40.190 14:17:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.190 [2024-11-18 14:17:32.240680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.190 [2024-11-18 14:17:32.240788] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.190 [2024-11-18 14:17:32.240831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:40.190 [2024-11-18 14:17:32.240874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.190 [2024-11-18 14:17:32.243114] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.190 [2024-11-18 14:17:32.243210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.190 pt1 00:15:40.190 14:17:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:40.190 14:17:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:40.190 14:17:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:40.191 14:17:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:40.191 14:17:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:40.191 14:17:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.191 14:17:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.191 14:17:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.191 14:17:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:40.450 malloc2 00:15:40.450 14:17:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.709 [2024-11-18 14:17:32.658498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.709 [2024-11-18 14:17:32.658576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.709 [2024-11-18 14:17:32.658611] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:40.709 [2024-11-18 14:17:32.658652] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.709 [2024-11-18 14:17:32.660865] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.709 [2024-11-18 14:17:32.660917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.709 pt2 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:40.709 14:17:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:40.967 malloc3 00:15:40.967 14:17:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:41.226 [2024-11-18 14:17:33.057404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:41.226 [2024-11-18 14:17:33.057478] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.226 [2024-11-18 14:17:33.057514] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.226 [2024-11-18 14:17:33.057556] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.226 [2024-11-18 14:17:33.059813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.226 [2024-11-18 14:17:33.059865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:41.226 pt3 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:41.226 [2024-11-18 14:17:33.249517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.226 [2024-11-18 14:17:33.251519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:41.226 [2024-11-18 14:17:33.251617] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:41.226 [2024-11-18 14:17:33.251870] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:41.226 [2024-11-18 14:17:33.251894] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:41.226 [2024-11-18 14:17:33.252041] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:41.226 [2024-11-18 14:17:33.252451] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:41.226 [2024-11-18 14:17:33.252475] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:15:41.226 [2024-11-18 14:17:33.252649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.226 14:17:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.485 14:17:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.485 "name": "raid_bdev1", 00:15:41.485 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:41.485 "strip_size_kb": 0, 00:15:41.485 "state": "online", 00:15:41.485 "raid_level": "raid1", 00:15:41.485 "superblock": true, 00:15:41.485 "num_base_bdevs": 3, 00:15:41.485 "num_base_bdevs_discovered": 3, 00:15:41.485 "num_base_bdevs_operational": 3, 00:15:41.485 "base_bdevs_list": [ 00:15:41.485 { 00:15:41.485 "name": 
"pt1", 00:15:41.485 "uuid": "23eaa414-0bb8-5598-ba09-eaf23339b68b", 00:15:41.485 "is_configured": true, 00:15:41.485 "data_offset": 2048, 00:15:41.485 "data_size": 63488 00:15:41.485 }, 00:15:41.485 { 00:15:41.485 "name": "pt2", 00:15:41.485 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:41.485 "is_configured": true, 00:15:41.485 "data_offset": 2048, 00:15:41.485 "data_size": 63488 00:15:41.485 }, 00:15:41.485 { 00:15:41.485 "name": "pt3", 00:15:41.485 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:41.485 "is_configured": true, 00:15:41.485 "data_offset": 2048, 00:15:41.485 "data_size": 63488 00:15:41.485 } 00:15:41.485 ] 00:15:41.485 }' 00:15:41.485 14:17:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.485 14:17:33 -- common/autotest_common.sh@10 -- # set +x 00:15:42.052 14:17:34 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:42.052 14:17:34 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:42.619 [2024-11-18 14:17:34.393891] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.619 14:17:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6ef4ff78-44ec-4faa-85bd-fa7778bb3b70 00:15:42.619 14:17:34 -- bdev/bdev_raid.sh@380 -- # '[' -z 6ef4ff78-44ec-4faa-85bd-fa7778bb3b70 ']' 00:15:42.619 14:17:34 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:42.619 [2024-11-18 14:17:34.585694] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.619 [2024-11-18 14:17:34.585719] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.619 [2024-11-18 14:17:34.585807] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.619 [2024-11-18 14:17:34.585893] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.619 [2024-11-18 14:17:34.585915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:15:42.619 14:17:34 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.619 14:17:34 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:42.891 14:17:34 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:42.891 14:17:34 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:42.891 14:17:34 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.891 14:17:34 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:43.170 14:17:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.170 14:17:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:43.464 14:17:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:43.464 14:17:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:43.723 14:17:35 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:43.723 14:17:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:43.982 14:17:35 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:43.982 14:17:35 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:43.982 14:17:35 -- common/autotest_common.sh@650 -- # local es=0 00:15:43.982 14:17:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:43.982 14:17:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.982 14:17:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.982 14:17:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.982 14:17:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.982 14:17:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.982 14:17:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.982 14:17:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:43.982 14:17:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:43.982 14:17:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:43.982 [2024-11-18 14:17:35.985926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:43.982 [2024-11-18 14:17:35.987992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:43.982 [2024-11-18 14:17:35.988056] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:43.982 [2024-11-18 14:17:35.988125] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:43.982 [2024-11-18 14:17:35.988231] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:43.982 [2024-11-18 14:17:35.988276] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:43.982 [2024-11-18 14:17:35.988335] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.982 [2024-11-18 14:17:35.988349] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:15:43.982 request: 00:15:43.982 { 00:15:43.982 "name": "raid_bdev1", 00:15:43.982 "raid_level": "raid1", 00:15:43.982 "base_bdevs": [ 00:15:43.982 "malloc1", 00:15:43.982 "malloc2", 00:15:43.982 "malloc3" 00:15:43.982 ], 00:15:43.982 "superblock": false, 00:15:43.982 "method": "bdev_raid_create", 00:15:43.982 "req_id": 1 00:15:43.982 } 00:15:43.982 Got JSON-RPC error response 00:15:43.982 response: 00:15:43.982 { 00:15:43.982 "code": -17, 00:15:43.982 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:43.982 } 00:15:43.982 14:17:35 -- common/autotest_common.sh@653 -- # es=1 00:15:43.982 14:17:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.982 14:17:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.982 14:17:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.982 14:17:35 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:43.982 14:17:35 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:44.241 14:17:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:44.241 14:17:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:44.241 14:17:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:44.499 [2024-11-18 14:17:36.365916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:44.499 [2024-11-18 14:17:36.365980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.499 [2024-11-18 14:17:36.366025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:44.499 [2024-11-18 14:17:36.366055] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.499 [2024-11-18 14:17:36.368366] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.499 [2024-11-18 14:17:36.368420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:44.499 [2024-11-18 14:17:36.368516] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:44.499 [2024-11-18 14:17:36.368579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:44.499 pt1 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.499 14:17:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.758 14:17:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.758 "name": "raid_bdev1", 00:15:44.758 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:44.758 "strip_size_kb": 0, 00:15:44.758 "state": "configuring", 00:15:44.758 "raid_level": "raid1", 00:15:44.758 "superblock": true, 00:15:44.758 "num_base_bdevs": 3, 00:15:44.758 "num_base_bdevs_discovered": 1, 00:15:44.758 "num_base_bdevs_operational": 3, 00:15:44.758 "base_bdevs_list": [ 00:15:44.758 { 00:15:44.758 "name": "pt1", 00:15:44.758 "uuid": "23eaa414-0bb8-5598-ba09-eaf23339b68b", 00:15:44.758 "is_configured": true, 00:15:44.758 "data_offset": 2048, 00:15:44.758 "data_size": 63488 00:15:44.758 }, 00:15:44.758 { 00:15:44.758 "name": null, 00:15:44.758 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:44.758 "is_configured": false, 00:15:44.758 "data_offset": 2048, 00:15:44.758 "data_size": 63488 00:15:44.758 }, 00:15:44.758 { 00:15:44.758 "name": null, 00:15:44.758 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:44.758 "is_configured": false, 00:15:44.758 "data_offset": 2048, 00:15:44.758 
"data_size": 63488 00:15:44.758 } 00:15:44.758 ] 00:15:44.758 }' 00:15:44.758 14:17:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.758 14:17:36 -- common/autotest_common.sh@10 -- # set +x 00:15:45.325 14:17:37 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:45.325 14:17:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.583 [2024-11-18 14:17:37.406108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.583 [2024-11-18 14:17:37.406193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.583 [2024-11-18 14:17:37.406236] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:45.583 [2024-11-18 14:17:37.406278] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.583 [2024-11-18 14:17:37.406635] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.583 [2024-11-18 14:17:37.406677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.583 [2024-11-18 14:17:37.406767] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:45.583 [2024-11-18 14:17:37.406793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.583 pt2 00:15:45.583 14:17:37 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:45.583 [2024-11-18 14:17:37.646166] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.842 "name": "raid_bdev1", 00:15:45.842 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:45.842 "strip_size_kb": 0, 00:15:45.842 "state": "configuring", 00:15:45.842 "raid_level": "raid1", 00:15:45.842 "superblock": true, 00:15:45.842 "num_base_bdevs": 3, 00:15:45.842 "num_base_bdevs_discovered": 1, 00:15:45.842 "num_base_bdevs_operational": 3, 00:15:45.842 "base_bdevs_list": [ 00:15:45.842 { 00:15:45.842 "name": "pt1", 00:15:45.842 "uuid": "23eaa414-0bb8-5598-ba09-eaf23339b68b", 00:15:45.842 "is_configured": true, 00:15:45.842 "data_offset": 2048, 00:15:45.842 "data_size": 63488 00:15:45.842 }, 00:15:45.842 { 00:15:45.842 "name": null, 00:15:45.842 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 
00:15:45.842 "is_configured": false, 00:15:45.842 "data_offset": 2048, 00:15:45.842 "data_size": 63488 00:15:45.842 }, 00:15:45.842 { 00:15:45.842 "name": null, 00:15:45.842 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:45.842 "is_configured": false, 00:15:45.842 "data_offset": 2048, 00:15:45.842 "data_size": 63488 00:15:45.842 } 00:15:45.842 ] 00:15:45.842 }' 00:15:45.842 14:17:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.842 14:17:37 -- common/autotest_common.sh@10 -- # set +x 00:15:46.409 14:17:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:46.409 14:17:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:46.409 14:17:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.668 [2024-11-18 14:17:38.714333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.668 [2024-11-18 14:17:38.714407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.668 [2024-11-18 14:17:38.714441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:46.668 [2024-11-18 14:17:38.714474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.668 [2024-11-18 14:17:38.714813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.668 [2024-11-18 14:17:38.714864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.668 [2024-11-18 14:17:38.714945] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:46.668 [2024-11-18 14:17:38.714970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.668 pt2 00:15:46.668 14:17:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:46.668 14:17:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:46.668 14:17:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:46.926 [2024-11-18 14:17:38.954396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:46.926 [2024-11-18 14:17:38.954460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.926 [2024-11-18 14:17:38.954497] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:46.926 [2024-11-18 14:17:38.954528] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.926 [2024-11-18 14:17:38.954880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.926 [2024-11-18 14:17:38.954931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:46.926 [2024-11-18 14:17:38.955022] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:46.926 [2024-11-18 14:17:38.955048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:46.926 [2024-11-18 14:17:38.955206] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:46.926 [2024-11-18 14:17:38.955235] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:46.926 [2024-11-18 14:17:38.955318] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:46.926 
[2024-11-18 14:17:38.955646] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:46.926 [2024-11-18 14:17:38.955669] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:46.926 [2024-11-18 14:17:38.955769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.926 pt3 00:15:46.926 14:17:38 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.927 14:17:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.185 14:17:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.185 "name": "raid_bdev1", 00:15:47.185 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:47.185 "strip_size_kb": 0, 00:15:47.185 "state": "online", 00:15:47.185 "raid_level": "raid1", 00:15:47.185 "superblock": true, 00:15:47.185 "num_base_bdevs": 3, 00:15:47.185 "num_base_bdevs_discovered": 3, 00:15:47.185 "num_base_bdevs_operational": 3, 00:15:47.185 "base_bdevs_list": [ 00:15:47.185 { 00:15:47.185 "name": "pt1", 00:15:47.185 "uuid": "23eaa414-0bb8-5598-ba09-eaf23339b68b", 00:15:47.185 "is_configured": true, 00:15:47.185 "data_offset": 2048, 00:15:47.185 "data_size": 63488 00:15:47.185 }, 00:15:47.185 { 00:15:47.185 "name": "pt2", 00:15:47.185 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:47.185 "is_configured": true, 00:15:47.185 "data_offset": 2048, 00:15:47.185 "data_size": 63488 00:15:47.185 }, 00:15:47.185 { 00:15:47.185 "name": "pt3", 00:15:47.185 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:47.185 "is_configured": true, 00:15:47.185 "data_offset": 2048, 00:15:47.185 "data_size": 63488 00:15:47.185 } 00:15:47.185 ] 00:15:47.185 }' 00:15:47.185 14:17:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.185 14:17:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.752 14:17:39 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:47.752 14:17:39 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:48.011 [2024-11-18 14:17:40.006749] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.011 14:17:40 -- bdev/bdev_raid.sh@430 -- # '[' 6ef4ff78-44ec-4faa-85bd-fa7778bb3b70 '!=' 6ef4ff78-44ec-4faa-85bd-fa7778bb3b70 ']' 00:15:48.011 14:17:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:48.011 14:17:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:48.011 14:17:40 -- bdev/bdev_raid.sh@196 -- # return 0 
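has_redundancy just returned 0 for raid1, so the hot-remove step that follows may delete a base bdev and still expect the array to stay online, with only the discovered count dropping. The standalone equivalent of that check would look roughly like this (a sketch; same socket as this run, and the jq expression is illustrative rather than the test's helper):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
  # expected, as the trace below confirms: online 2/2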
00:15:48.011 14:17:40 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:48.269 [2024-11-18 14:17:40.270647] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.269 14:17:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.528 14:17:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:48.528 "name": "raid_bdev1", 00:15:48.528 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:48.528 "strip_size_kb": 0, 00:15:48.528 "state": "online", 00:15:48.528 "raid_level": "raid1", 00:15:48.528 "superblock": true, 00:15:48.528 "num_base_bdevs": 3, 00:15:48.528 "num_base_bdevs_discovered": 2, 00:15:48.528 "num_base_bdevs_operational": 2, 00:15:48.528 "base_bdevs_list": [ 00:15:48.528 { 00:15:48.528 "name": null, 00:15:48.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.528 "is_configured": false, 00:15:48.528 "data_offset": 2048, 00:15:48.528 "data_size": 63488 00:15:48.528 }, 00:15:48.528 { 00:15:48.528 "name": "pt2", 00:15:48.528 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:48.528 "is_configured": true, 00:15:48.528 "data_offset": 2048, 00:15:48.529 "data_size": 63488 00:15:48.529 }, 00:15:48.529 { 00:15:48.529 "name": "pt3", 00:15:48.529 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:48.529 "is_configured": true, 00:15:48.529 "data_offset": 2048, 00:15:48.529 "data_size": 63488 00:15:48.529 } 00:15:48.529 ] 00:15:48.529 }' 00:15:48.529 14:17:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:48.529 14:17:40 -- common/autotest_common.sh@10 -- # set +x 00:15:49.096 14:17:41 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:49.355 [2024-11-18 14:17:41.266783] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.355 [2024-11-18 14:17:41.266808] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.355 [2024-11-18 14:17:41.266862] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.355 [2024-11-18 14:17:41.266916] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.355 [2024-11-18 14:17:41.266929] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:49.355 14:17:41 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
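The teardown traced above runs in a fixed order: bdev_raid_delete first deconfigures the array (state changing from online to offline), then destructs it, and once its base-bdev count reaches 0 the raid bdev is freed outright. The bdev_raid_get_bdevs dump requested on the next trace line therefore comes back empty; a minimal sketch of the same teardown check (same socket as this run):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
  # expected: no raid_bdev1 entry left, so the helper's jq '.[]' yields an empty raid_bdev=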
00:15:49.355 14:17:41 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:49.614 14:17:41 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:49.872 14:17:41 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:49.872 14:17:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:49.872 14:17:41 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:49.873 14:17:41 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:49.873 14:17:41 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.131 [2024-11-18 14:17:42.006883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.131 [2024-11-18 14:17:42.006943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.131 [2024-11-18 14:17:42.006986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:50.131 [2024-11-18 14:17:42.007010] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.131 [2024-11-18 14:17:42.009210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.131 [2024-11-18 14:17:42.009270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.131 [2024-11-18 14:17:42.009360] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:50.131 [2024-11-18 14:17:42.009407] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.131 pt2 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.131 14:17:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.390 14:17:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.390 "name": "raid_bdev1", 00:15:50.391 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:50.391 "strip_size_kb": 0, 00:15:50.391 "state": "configuring", 00:15:50.391 "raid_level": 
"raid1", 00:15:50.391 "superblock": true, 00:15:50.391 "num_base_bdevs": 3, 00:15:50.391 "num_base_bdevs_discovered": 1, 00:15:50.391 "num_base_bdevs_operational": 2, 00:15:50.391 "base_bdevs_list": [ 00:15:50.391 { 00:15:50.391 "name": null, 00:15:50.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.391 "is_configured": false, 00:15:50.391 "data_offset": 2048, 00:15:50.391 "data_size": 63488 00:15:50.391 }, 00:15:50.391 { 00:15:50.391 "name": "pt2", 00:15:50.391 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:50.391 "is_configured": true, 00:15:50.391 "data_offset": 2048, 00:15:50.391 "data_size": 63488 00:15:50.391 }, 00:15:50.391 { 00:15:50.391 "name": null, 00:15:50.391 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:50.391 "is_configured": false, 00:15:50.391 "data_offset": 2048, 00:15:50.391 "data_size": 63488 00:15:50.391 } 00:15:50.391 ] 00:15:50.391 }' 00:15:50.391 14:17:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.391 14:17:42 -- common/autotest_common.sh@10 -- # set +x 00:15:50.959 14:17:42 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:15:50.959 14:17:42 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:50.959 14:17:42 -- bdev/bdev_raid.sh@462 -- # i=2 00:15:50.959 14:17:42 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.218 [2024-11-18 14:17:43.111092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.218 [2024-11-18 14:17:43.111169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.218 [2024-11-18 14:17:43.111211] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:51.218 [2024-11-18 14:17:43.111236] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.218 [2024-11-18 14:17:43.111594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.218 [2024-11-18 14:17:43.111642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.218 [2024-11-18 14:17:43.111729] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:51.218 [2024-11-18 14:17:43.111754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.218 [2024-11-18 14:17:43.111844] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:15:51.218 [2024-11-18 14:17:43.111861] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.218 [2024-11-18 14:17:43.111920] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:15:51.218 [2024-11-18 14:17:43.112223] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:15:51.218 [2024-11-18 14:17:43.112248] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:15:51.218 [2024-11-18 14:17:43.112340] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.218 pt3 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:51.218 
14:17:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.218 14:17:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.477 14:17:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.477 "name": "raid_bdev1", 00:15:51.477 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:51.477 "strip_size_kb": 0, 00:15:51.477 "state": "online", 00:15:51.477 "raid_level": "raid1", 00:15:51.477 "superblock": true, 00:15:51.477 "num_base_bdevs": 3, 00:15:51.477 "num_base_bdevs_discovered": 2, 00:15:51.477 "num_base_bdevs_operational": 2, 00:15:51.477 "base_bdevs_list": [ 00:15:51.477 { 00:15:51.477 "name": null, 00:15:51.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.477 "is_configured": false, 00:15:51.477 "data_offset": 2048, 00:15:51.477 "data_size": 63488 00:15:51.477 }, 00:15:51.477 { 00:15:51.477 "name": "pt2", 00:15:51.477 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:51.477 "is_configured": true, 00:15:51.477 "data_offset": 2048, 00:15:51.477 "data_size": 63488 00:15:51.477 }, 00:15:51.477 { 00:15:51.477 "name": "pt3", 00:15:51.477 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:51.477 "is_configured": true, 00:15:51.477 "data_offset": 2048, 00:15:51.477 "data_size": 63488 00:15:51.477 } 00:15:51.477 ] 00:15:51.477 }' 00:15:51.477 14:17:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.477 14:17:43 -- common/autotest_common.sh@10 -- # set +x 00:15:52.150 14:17:43 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:15:52.150 14:17:43 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:52.150 [2024-11-18 14:17:44.203274] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:52.150 [2024-11-18 14:17:44.203300] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.150 [2024-11-18 14:17:44.203343] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.150 [2024-11-18 14:17:44.203392] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.150 [2024-11-18 14:17:44.203404] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:15:52.409 14:17:44 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:15:52.409 14:17:44 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.409 14:17:44 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:15:52.409 14:17:44 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:15:52.409 14:17:44 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:52.668 [2024-11-18 14:17:44.635333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:52.668 [2024-11-18 
14:17:44.635391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.668 [2024-11-18 14:17:44.635432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:52.668 [2024-11-18 14:17:44.635456] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.668 [2024-11-18 14:17:44.637687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.668 [2024-11-18 14:17:44.637741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:52.668 [2024-11-18 14:17:44.637833] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:52.668 [2024-11-18 14:17:44.637875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.668 pt1 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.668 14:17:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.926 14:17:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.926 "name": "raid_bdev1", 00:15:52.926 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:52.926 "strip_size_kb": 0, 00:15:52.926 "state": "configuring", 00:15:52.926 "raid_level": "raid1", 00:15:52.926 "superblock": true, 00:15:52.926 "num_base_bdevs": 3, 00:15:52.926 "num_base_bdevs_discovered": 1, 00:15:52.926 "num_base_bdevs_operational": 3, 00:15:52.926 "base_bdevs_list": [ 00:15:52.926 { 00:15:52.926 "name": "pt1", 00:15:52.926 "uuid": "23eaa414-0bb8-5598-ba09-eaf23339b68b", 00:15:52.926 "is_configured": true, 00:15:52.926 "data_offset": 2048, 00:15:52.926 "data_size": 63488 00:15:52.926 }, 00:15:52.926 { 00:15:52.926 "name": null, 00:15:52.926 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:52.926 "is_configured": false, 00:15:52.926 "data_offset": 2048, 00:15:52.926 "data_size": 63488 00:15:52.926 }, 00:15:52.926 { 00:15:52.926 "name": null, 00:15:52.926 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:52.926 "is_configured": false, 00:15:52.926 "data_offset": 2048, 00:15:52.926 "data_size": 63488 00:15:52.926 } 00:15:52.926 ] 00:15:52.926 }' 00:15:52.926 14:17:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.926 14:17:44 -- common/autotest_common.sh@10 -- # set +x 00:15:53.492 14:17:45 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:15:53.492 14:17:45 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:15:53.492 14:17:45 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:53.750 14:17:45 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:15:53.750 
14:17:45 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:15:53.750 14:17:45 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:53.750 14:17:45 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:15:53.750 14:17:45 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:15:53.750 14:17:45 -- bdev/bdev_raid.sh@489 -- # i=2 00:15:53.750 14:17:45 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.009 [2024-11-18 14:17:45.907544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.009 [2024-11-18 14:17:45.907611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.009 [2024-11-18 14:17:45.907643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:54.009 [2024-11-18 14:17:45.907673] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.009 [2024-11-18 14:17:45.908021] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.009 [2024-11-18 14:17:45.908072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.009 [2024-11-18 14:17:45.908154] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:54.009 [2024-11-18 14:17:45.908170] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:54.009 [2024-11-18 14:17:45.908177] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.009 [2024-11-18 14:17:45.908206] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:15:54.009 [2024-11-18 14:17:45.908256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.009 pt3 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.009 14:17:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.268 14:17:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.268 "name": "raid_bdev1", 00:15:54.268 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:54.268 "strip_size_kb": 0, 00:15:54.268 "state": "configuring", 00:15:54.268 "raid_level": "raid1", 00:15:54.268 "superblock": true, 00:15:54.268 "num_base_bdevs": 3, 00:15:54.268 "num_base_bdevs_discovered": 1, 00:15:54.268 "num_base_bdevs_operational": 2, 00:15:54.268 
"base_bdevs_list": [ 00:15:54.268 { 00:15:54.268 "name": null, 00:15:54.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.268 "is_configured": false, 00:15:54.268 "data_offset": 2048, 00:15:54.268 "data_size": 63488 00:15:54.268 }, 00:15:54.268 { 00:15:54.268 "name": null, 00:15:54.268 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:54.268 "is_configured": false, 00:15:54.268 "data_offset": 2048, 00:15:54.268 "data_size": 63488 00:15:54.268 }, 00:15:54.268 { 00:15:54.268 "name": "pt3", 00:15:54.268 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:54.268 "is_configured": true, 00:15:54.268 "data_offset": 2048, 00:15:54.268 "data_size": 63488 00:15:54.268 } 00:15:54.268 ] 00:15:54.268 }' 00:15:54.268 14:17:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.268 14:17:46 -- common/autotest_common.sh@10 -- # set +x 00:15:54.836 14:17:46 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:15:54.836 14:17:46 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:15:54.836 14:17:46 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.095 [2024-11-18 14:17:46.943726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.095 [2024-11-18 14:17:46.943793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.095 [2024-11-18 14:17:46.943826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:55.095 [2024-11-18 14:17:46.943857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.095 [2024-11-18 14:17:46.944197] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.095 [2024-11-18 14:17:46.944247] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.095 [2024-11-18 14:17:46.944317] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:55.095 [2024-11-18 14:17:46.944350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.095 [2024-11-18 14:17:46.944444] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:15:55.095 [2024-11-18 14:17:46.944466] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.095 [2024-11-18 14:17:46.944526] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:15:55.095 [2024-11-18 14:17:46.944860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:15:55.095 [2024-11-18 14:17:46.944885] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:15:55.095 [2024-11-18 14:17:46.944994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.095 pt2 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:55.095 14:17:46 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.095 14:17:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.095 14:17:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.095 "name": "raid_bdev1", 00:15:55.095 "uuid": "6ef4ff78-44ec-4faa-85bd-fa7778bb3b70", 00:15:55.095 "strip_size_kb": 0, 00:15:55.095 "state": "online", 00:15:55.095 "raid_level": "raid1", 00:15:55.095 "superblock": true, 00:15:55.095 "num_base_bdevs": 3, 00:15:55.095 "num_base_bdevs_discovered": 2, 00:15:55.095 "num_base_bdevs_operational": 2, 00:15:55.095 "base_bdevs_list": [ 00:15:55.095 { 00:15:55.095 "name": null, 00:15:55.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.095 "is_configured": false, 00:15:55.095 "data_offset": 2048, 00:15:55.095 "data_size": 63488 00:15:55.095 }, 00:15:55.095 { 00:15:55.095 "name": "pt2", 00:15:55.095 "uuid": "e4259d03-f1df-532b-b195-aa101c5ccf95", 00:15:55.095 "is_configured": true, 00:15:55.095 "data_offset": 2048, 00:15:55.095 "data_size": 63488 00:15:55.095 }, 00:15:55.095 { 00:15:55.095 "name": "pt3", 00:15:55.095 "uuid": "6a4541df-9b3f-5cfa-892a-e140df538cb8", 00:15:55.096 "is_configured": true, 00:15:55.096 "data_offset": 2048, 00:15:55.096 "data_size": 63488 00:15:55.096 } 00:15:55.096 ] 00:15:55.096 }' 00:15:55.096 14:17:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.096 14:17:47 -- common/autotest_common.sh@10 -- # set +x 00:15:56.031 14:17:47 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.031 14:17:47 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:56.031 [2024-11-18 14:17:47.968043] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.031 14:17:47 -- bdev/bdev_raid.sh@506 -- # '[' 6ef4ff78-44ec-4faa-85bd-fa7778bb3b70 '!=' 6ef4ff78-44ec-4faa-85bd-fa7778bb3b70 ']' 00:15:56.031 14:17:47 -- bdev/bdev_raid.sh@511 -- # killprocess 127965 00:15:56.031 14:17:47 -- common/autotest_common.sh@936 -- # '[' -z 127965 ']' 00:15:56.031 14:17:47 -- common/autotest_common.sh@940 -- # kill -0 127965 00:15:56.031 14:17:47 -- common/autotest_common.sh@941 -- # uname 00:15:56.031 14:17:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:56.031 14:17:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127965 00:15:56.031 14:17:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.031 14:17:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.031 14:17:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127965' 00:15:56.031 killing process with pid 127965 00:15:56.031 14:17:48 -- common/autotest_common.sh@955 -- # kill 127965 00:15:56.031 [2024-11-18 14:17:48.010819] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.031 14:17:48 -- common/autotest_common.sh@960 -- # wait 127965 00:15:56.031 [2024-11-18 14:17:48.011078] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.031 [2024-11-18 14:17:48.011359] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.031 [2024-11-18 14:17:48.011491] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:15:56.031 [2024-11-18 14:17:48.049409] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.290 14:17:48 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:56.290 00:15:56.290 real 0m17.483s 00:15:56.290 user 0m33.056s 00:15:56.290 sys 0m1.954s 00:15:56.290 14:17:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:56.290 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:15:56.290 ************************************ 00:15:56.290 END TEST raid_superblock_test 00:15:56.290 ************************************ 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:15:56.549 14:17:48 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:56.549 14:17:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:56.549 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:15:56.549 ************************************ 00:15:56.549 START TEST raid_state_function_test 00:15:56.549 ************************************ 00:15:56.549 14:17:48 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:56.549 
14:17:48 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=128557 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:56.549 Process raid pid: 128557 00:15:56.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128557' 00:15:56.549 14:17:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128557 /var/tmp/spdk-raid.sock 00:15:56.549 14:17:48 -- common/autotest_common.sh@829 -- # '[' -z 128557 ']' 00:15:56.549 14:17:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.549 14:17:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.549 14:17:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.549 14:17:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.549 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:15:56.549 [2024-11-18 14:17:48.473150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:56.549 [2024-11-18 14:17:48.473581] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.549 [2024-11-18 14:17:48.619917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.808 [2024-11-18 14:17:48.688548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.808 [2024-11-18 14:17:48.758997] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.375 14:17:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.375 14:17:49 -- common/autotest_common.sh@862 -- # return 0 00:15:57.375 14:17:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:57.633 [2024-11-18 14:17:49.597634] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.633 [2024-11-18 14:17:49.597933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.633 [2024-11-18 14:17:49.598105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.633 [2024-11-18 14:17:49.598253] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.633 [2024-11-18 14:17:49.598373] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.633 [2024-11-18 14:17:49.598463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.633 [2024-11-18 14:17:49.598504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:57.634 [2024-11-18 14:17:49.598565] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.634 14:17:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.892 14:17:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.892 "name": "Existed_Raid", 00:15:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.892 "strip_size_kb": 64, 00:15:57.892 "state": "configuring", 00:15:57.892 "raid_level": "raid0", 00:15:57.892 "superblock": false, 00:15:57.892 "num_base_bdevs": 4, 00:15:57.892 "num_base_bdevs_discovered": 0, 00:15:57.892 "num_base_bdevs_operational": 4, 00:15:57.892 "base_bdevs_list": [ 00:15:57.892 { 00:15:57.892 "name": "BaseBdev1", 00:15:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.892 "is_configured": false, 00:15:57.892 "data_offset": 0, 00:15:57.892 "data_size": 0 00:15:57.892 }, 00:15:57.892 { 00:15:57.892 "name": "BaseBdev2", 00:15:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.892 "is_configured": false, 00:15:57.892 "data_offset": 0, 00:15:57.892 "data_size": 0 00:15:57.892 }, 00:15:57.892 { 00:15:57.892 "name": "BaseBdev3", 00:15:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.892 "is_configured": false, 00:15:57.892 "data_offset": 0, 00:15:57.892 "data_size": 0 00:15:57.892 }, 00:15:57.892 { 00:15:57.892 "name": "BaseBdev4", 00:15:57.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.892 "is_configured": false, 00:15:57.892 "data_offset": 0, 00:15:57.892 "data_size": 0 00:15:57.892 } 00:15:57.892 ] 00:15:57.892 }' 00:15:57.892 14:17:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.892 14:17:49 -- common/autotest_common.sh@10 -- # set +x 00:15:58.458 14:17:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:58.716 [2024-11-18 14:17:50.681674] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.716 [2024-11-18 14:17:50.681870] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:58.716 14:17:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:58.975 [2024-11-18 14:17:50.913712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.975 [2024-11-18 14:17:50.913904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.975 [2024-11-18 14:17:50.914050] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.975 [2024-11-18 14:17:50.914232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:15:58.975 [2024-11-18 14:17:50.914336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.975 [2024-11-18 14:17:50.914400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.975 [2024-11-18 14:17:50.914436] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:58.975 [2024-11-18 14:17:50.914494] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:58.975 14:17:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:59.234 [2024-11-18 14:17:51.115691] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.234 BaseBdev1 00:15:59.234 14:17:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:59.234 14:17:51 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:59.234 14:17:51 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:59.234 14:17:51 -- common/autotest_common.sh@899 -- # local i 00:15:59.234 14:17:51 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:59.234 14:17:51 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:59.234 14:17:51 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.493 14:17:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.493 [ 00:15:59.493 { 00:15:59.493 "name": "BaseBdev1", 00:15:59.493 "aliases": [ 00:15:59.493 "a7083ca4-abe1-4d61-99f6-bf184feed913" 00:15:59.493 ], 00:15:59.493 "product_name": "Malloc disk", 00:15:59.493 "block_size": 512, 00:15:59.493 "num_blocks": 65536, 00:15:59.493 "uuid": "a7083ca4-abe1-4d61-99f6-bf184feed913", 00:15:59.493 "assigned_rate_limits": { 00:15:59.493 "rw_ios_per_sec": 0, 00:15:59.493 "rw_mbytes_per_sec": 0, 00:15:59.493 "r_mbytes_per_sec": 0, 00:15:59.493 "w_mbytes_per_sec": 0 00:15:59.493 }, 00:15:59.493 "claimed": true, 00:15:59.493 "claim_type": "exclusive_write", 00:15:59.493 "zoned": false, 00:15:59.493 "supported_io_types": { 00:15:59.493 "read": true, 00:15:59.493 "write": true, 00:15:59.493 "unmap": true, 00:15:59.493 "write_zeroes": true, 00:15:59.493 "flush": true, 00:15:59.493 "reset": true, 00:15:59.493 "compare": false, 00:15:59.493 "compare_and_write": false, 00:15:59.493 "abort": true, 00:15:59.493 "nvme_admin": false, 00:15:59.493 "nvme_io": false 00:15:59.493 }, 00:15:59.493 "memory_domains": [ 00:15:59.493 { 00:15:59.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.493 "dma_device_type": 2 00:15:59.493 } 00:15:59.493 ], 00:15:59.493 "driver_specific": {} 00:15:59.493 } 00:15:59.493 ] 00:15:59.493 14:17:51 -- common/autotest_common.sh@905 -- # return 0 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
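The verify_raid_bdev_state helper being traced here reduces to one RPC query plus a jq filter over its JSON output. A minimal manual equivalent — a sketch only, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and that scripts/rpc.py is the SPDK repo copy, both of which appear verbatim in the trace above:

    # Sketch: pull the named raid bdev out of bdev_raid_get_bdevs
    # and assert on one of the fields the helper checks ("state").
    tmp=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<<"$tmp") == configuring ]] || return 1

The locals set at bdev_raid.sh@117-125 in the trace suggest the real helper compares expected_state, raid_level, strip_size, and the discovered/operational base bdev counts against its arguments in the same way.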
00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.493 14:17:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.753 14:17:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.753 "name": "Existed_Raid", 00:15:59.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.753 "strip_size_kb": 64, 00:15:59.753 "state": "configuring", 00:15:59.753 "raid_level": "raid0", 00:15:59.753 "superblock": false, 00:15:59.753 "num_base_bdevs": 4, 00:15:59.753 "num_base_bdevs_discovered": 1, 00:15:59.753 "num_base_bdevs_operational": 4, 00:15:59.753 "base_bdevs_list": [ 00:15:59.753 { 00:15:59.753 "name": "BaseBdev1", 00:15:59.753 "uuid": "a7083ca4-abe1-4d61-99f6-bf184feed913", 00:15:59.753 "is_configured": true, 00:15:59.753 "data_offset": 0, 00:15:59.753 "data_size": 65536 00:15:59.753 }, 00:15:59.753 { 00:15:59.753 "name": "BaseBdev2", 00:15:59.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.753 "is_configured": false, 00:15:59.753 "data_offset": 0, 00:15:59.753 "data_size": 0 00:15:59.753 }, 00:15:59.753 { 00:15:59.753 "name": "BaseBdev3", 00:15:59.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.753 "is_configured": false, 00:15:59.753 "data_offset": 0, 00:15:59.753 "data_size": 0 00:15:59.753 }, 00:15:59.753 { 00:15:59.753 "name": "BaseBdev4", 00:15:59.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.753 "is_configured": false, 00:15:59.753 "data_offset": 0, 00:15:59.753 "data_size": 0 00:15:59.753 } 00:15:59.753 ] 00:15:59.753 }' 00:15:59.753 14:17:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.753 14:17:51 -- common/autotest_common.sh@10 -- # set +x 00:16:00.322 14:17:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.581 [2024-11-18 14:17:52.547916] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.581 [2024-11-18 14:17:52.548109] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:00.581 14:17:52 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:00.581 14:17:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:00.840 [2024-11-18 14:17:52.736023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.840 [2024-11-18 14:17:52.738176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.840 [2024-11-18 14:17:52.738372] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.840 [2024-11-18 14:17:52.738490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.840 [2024-11-18 14:17:52.738562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.840 [2024-11-18 14:17:52.738660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.840 [2024-11-18 14:17:52.738726] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:00.840 14:17:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.841 14:17:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.841 14:17:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.841 14:17:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.841 14:17:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.841 14:17:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.100 14:17:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.100 "name": "Existed_Raid", 00:16:01.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.100 "strip_size_kb": 64, 00:16:01.100 "state": "configuring", 00:16:01.100 "raid_level": "raid0", 00:16:01.100 "superblock": false, 00:16:01.100 "num_base_bdevs": 4, 00:16:01.100 "num_base_bdevs_discovered": 1, 00:16:01.100 "num_base_bdevs_operational": 4, 00:16:01.100 "base_bdevs_list": [ 00:16:01.100 { 00:16:01.100 "name": "BaseBdev1", 00:16:01.100 "uuid": "a7083ca4-abe1-4d61-99f6-bf184feed913", 00:16:01.100 "is_configured": true, 00:16:01.100 "data_offset": 0, 00:16:01.100 "data_size": 65536 00:16:01.100 }, 00:16:01.100 { 00:16:01.100 "name": "BaseBdev2", 00:16:01.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.100 "is_configured": false, 00:16:01.100 "data_offset": 0, 00:16:01.100 "data_size": 0 00:16:01.100 }, 00:16:01.100 { 00:16:01.100 "name": "BaseBdev3", 00:16:01.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.100 "is_configured": false, 00:16:01.100 "data_offset": 0, 00:16:01.100 "data_size": 0 00:16:01.100 }, 00:16:01.100 { 00:16:01.100 "name": "BaseBdev4", 00:16:01.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.100 "is_configured": false, 00:16:01.100 "data_offset": 0, 00:16:01.100 "data_size": 0 00:16:01.100 } 00:16:01.100 ] 00:16:01.100 }' 00:16:01.100 14:17:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.100 14:17:52 -- common/autotest_common.sh@10 -- # set +x 00:16:01.668 14:17:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.928 [2024-11-18 14:17:53.851815] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.928 BaseBdev2 00:16:01.928 14:17:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:01.928 14:17:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:01.928 14:17:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.928 14:17:53 -- common/autotest_common.sh@899 -- # local i 00:16:01.928 14:17:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.928 14:17:53 -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:16:01.928 14:17:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.187 14:17:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:02.447 [ 00:16:02.447 { 00:16:02.447 "name": "BaseBdev2", 00:16:02.447 "aliases": [ 00:16:02.447 "1e3e1742-221a-4de2-9bb7-d0aec5f4c775" 00:16:02.447 ], 00:16:02.447 "product_name": "Malloc disk", 00:16:02.447 "block_size": 512, 00:16:02.447 "num_blocks": 65536, 00:16:02.447 "uuid": "1e3e1742-221a-4de2-9bb7-d0aec5f4c775", 00:16:02.447 "assigned_rate_limits": { 00:16:02.447 "rw_ios_per_sec": 0, 00:16:02.447 "rw_mbytes_per_sec": 0, 00:16:02.447 "r_mbytes_per_sec": 0, 00:16:02.447 "w_mbytes_per_sec": 0 00:16:02.447 }, 00:16:02.447 "claimed": true, 00:16:02.447 "claim_type": "exclusive_write", 00:16:02.447 "zoned": false, 00:16:02.447 "supported_io_types": { 00:16:02.447 "read": true, 00:16:02.447 "write": true, 00:16:02.447 "unmap": true, 00:16:02.447 "write_zeroes": true, 00:16:02.447 "flush": true, 00:16:02.447 "reset": true, 00:16:02.447 "compare": false, 00:16:02.447 "compare_and_write": false, 00:16:02.447 "abort": true, 00:16:02.447 "nvme_admin": false, 00:16:02.447 "nvme_io": false 00:16:02.447 }, 00:16:02.447 "memory_domains": [ 00:16:02.447 { 00:16:02.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.447 "dma_device_type": 2 00:16:02.447 } 00:16:02.447 ], 00:16:02.447 "driver_specific": {} 00:16:02.447 } 00:16:02.447 ] 00:16:02.447 14:17:54 -- common/autotest_common.sh@905 -- # return 0 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.447 14:17:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.706 14:17:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.706 "name": "Existed_Raid", 00:16:02.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.706 "strip_size_kb": 64, 00:16:02.706 "state": "configuring", 00:16:02.706 "raid_level": "raid0", 00:16:02.706 "superblock": false, 00:16:02.706 "num_base_bdevs": 4, 00:16:02.706 "num_base_bdevs_discovered": 2, 00:16:02.706 "num_base_bdevs_operational": 4, 00:16:02.706 "base_bdevs_list": [ 00:16:02.706 { 00:16:02.706 "name": "BaseBdev1", 00:16:02.706 "uuid": "a7083ca4-abe1-4d61-99f6-bf184feed913", 00:16:02.706 "is_configured": true, 00:16:02.706 "data_offset": 0, 00:16:02.706 "data_size": 65536 00:16:02.706 }, 
00:16:02.706 { 00:16:02.706 "name": "BaseBdev2", 00:16:02.706 "uuid": "1e3e1742-221a-4de2-9bb7-d0aec5f4c775", 00:16:02.706 "is_configured": true, 00:16:02.706 "data_offset": 0, 00:16:02.706 "data_size": 65536 00:16:02.706 }, 00:16:02.706 { 00:16:02.706 "name": "BaseBdev3", 00:16:02.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.706 "is_configured": false, 00:16:02.706 "data_offset": 0, 00:16:02.706 "data_size": 0 00:16:02.706 }, 00:16:02.706 { 00:16:02.706 "name": "BaseBdev4", 00:16:02.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.706 "is_configured": false, 00:16:02.706 "data_offset": 0, 00:16:02.706 "data_size": 0 00:16:02.706 } 00:16:02.706 ] 00:16:02.706 }' 00:16:02.706 14:17:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.706 14:17:54 -- common/autotest_common.sh@10 -- # set +x 00:16:03.349 14:17:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:03.349 [2024-11-18 14:17:55.327803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.349 BaseBdev3 00:16:03.349 14:17:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:03.349 14:17:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:03.349 14:17:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.349 14:17:55 -- common/autotest_common.sh@899 -- # local i 00:16:03.349 14:17:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.349 14:17:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.349 14:17:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.608 14:17:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:03.866 [ 00:16:03.866 { 00:16:03.866 "name": "BaseBdev3", 00:16:03.866 "aliases": [ 00:16:03.866 "9def12ee-5aeb-4f4c-bfe2-36b2b8833e2b" 00:16:03.866 ], 00:16:03.866 "product_name": "Malloc disk", 00:16:03.866 "block_size": 512, 00:16:03.866 "num_blocks": 65536, 00:16:03.866 "uuid": "9def12ee-5aeb-4f4c-bfe2-36b2b8833e2b", 00:16:03.866 "assigned_rate_limits": { 00:16:03.866 "rw_ios_per_sec": 0, 00:16:03.866 "rw_mbytes_per_sec": 0, 00:16:03.866 "r_mbytes_per_sec": 0, 00:16:03.866 "w_mbytes_per_sec": 0 00:16:03.866 }, 00:16:03.866 "claimed": true, 00:16:03.866 "claim_type": "exclusive_write", 00:16:03.866 "zoned": false, 00:16:03.866 "supported_io_types": { 00:16:03.866 "read": true, 00:16:03.866 "write": true, 00:16:03.866 "unmap": true, 00:16:03.866 "write_zeroes": true, 00:16:03.866 "flush": true, 00:16:03.866 "reset": true, 00:16:03.866 "compare": false, 00:16:03.866 "compare_and_write": false, 00:16:03.866 "abort": true, 00:16:03.866 "nvme_admin": false, 00:16:03.866 "nvme_io": false 00:16:03.866 }, 00:16:03.866 "memory_domains": [ 00:16:03.866 { 00:16:03.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.866 "dma_device_type": 2 00:16:03.866 } 00:16:03.866 ], 00:16:03.866 "driver_specific": {} 00:16:03.866 } 00:16:03.866 ] 00:16:03.866 14:17:55 -- common/autotest_common.sh@905 -- # return 0 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:03.866 14:17:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.867 14:17:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.867 14:17:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.867 14:17:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.867 14:17:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.867 14:17:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.125 14:17:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.125 "name": "Existed_Raid", 00:16:04.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.125 "strip_size_kb": 64, 00:16:04.125 "state": "configuring", 00:16:04.125 "raid_level": "raid0", 00:16:04.125 "superblock": false, 00:16:04.125 "num_base_bdevs": 4, 00:16:04.125 "num_base_bdevs_discovered": 3, 00:16:04.125 "num_base_bdevs_operational": 4, 00:16:04.125 "base_bdevs_list": [ 00:16:04.125 { 00:16:04.125 "name": "BaseBdev1", 00:16:04.125 "uuid": "a7083ca4-abe1-4d61-99f6-bf184feed913", 00:16:04.125 "is_configured": true, 00:16:04.125 "data_offset": 0, 00:16:04.125 "data_size": 65536 00:16:04.125 }, 00:16:04.125 { 00:16:04.125 "name": "BaseBdev2", 00:16:04.125 "uuid": "1e3e1742-221a-4de2-9bb7-d0aec5f4c775", 00:16:04.125 "is_configured": true, 00:16:04.125 "data_offset": 0, 00:16:04.125 "data_size": 65536 00:16:04.125 }, 00:16:04.125 { 00:16:04.125 "name": "BaseBdev3", 00:16:04.125 "uuid": "9def12ee-5aeb-4f4c-bfe2-36b2b8833e2b", 00:16:04.125 "is_configured": true, 00:16:04.125 "data_offset": 0, 00:16:04.125 "data_size": 65536 00:16:04.125 }, 00:16:04.125 { 00:16:04.125 "name": "BaseBdev4", 00:16:04.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.125 "is_configured": false, 00:16:04.125 "data_offset": 0, 00:16:04.125 "data_size": 0 00:16:04.125 } 00:16:04.125 ] 00:16:04.125 }' 00:16:04.125 14:17:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.125 14:17:55 -- common/autotest_common.sh@10 -- # set +x 00:16:04.693 14:17:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:04.693 [2024-11-18 14:17:56.691699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.693 [2024-11-18 14:17:56.691882] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:04.693 [2024-11-18 14:17:56.691933] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:04.693 [2024-11-18 14:17:56.692211] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:04.693 [2024-11-18 14:17:56.692773] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:04.693 [2024-11-18 14:17:56.692911] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:04.693 [2024-11-18 14:17:56.693265] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.693 BaseBdev4 00:16:04.693 14:17:56 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:16:04.693 14:17:56 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:04.693 14:17:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:04.693 14:17:56 -- common/autotest_common.sh@899 -- # local i 00:16:04.693 14:17:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:04.693 14:17:56 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:04.693 14:17:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.951 14:17:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.210 [ 00:16:05.210 { 00:16:05.210 "name": "BaseBdev4", 00:16:05.210 "aliases": [ 00:16:05.210 "51f848eb-bed8-48a4-92e5-5d8c71a0f295" 00:16:05.210 ], 00:16:05.210 "product_name": "Malloc disk", 00:16:05.210 "block_size": 512, 00:16:05.210 "num_blocks": 65536, 00:16:05.210 "uuid": "51f848eb-bed8-48a4-92e5-5d8c71a0f295", 00:16:05.210 "assigned_rate_limits": { 00:16:05.210 "rw_ios_per_sec": 0, 00:16:05.210 "rw_mbytes_per_sec": 0, 00:16:05.210 "r_mbytes_per_sec": 0, 00:16:05.210 "w_mbytes_per_sec": 0 00:16:05.210 }, 00:16:05.210 "claimed": true, 00:16:05.210 "claim_type": "exclusive_write", 00:16:05.210 "zoned": false, 00:16:05.210 "supported_io_types": { 00:16:05.210 "read": true, 00:16:05.210 "write": true, 00:16:05.210 "unmap": true, 00:16:05.210 "write_zeroes": true, 00:16:05.210 "flush": true, 00:16:05.210 "reset": true, 00:16:05.210 "compare": false, 00:16:05.210 "compare_and_write": false, 00:16:05.210 "abort": true, 00:16:05.210 "nvme_admin": false, 00:16:05.210 "nvme_io": false 00:16:05.210 }, 00:16:05.210 "memory_domains": [ 00:16:05.210 { 00:16:05.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.210 "dma_device_type": 2 00:16:05.210 } 00:16:05.210 ], 00:16:05.210 "driver_specific": {} 00:16:05.210 } 00:16:05.210 ] 00:16:05.210 14:17:57 -- common/autotest_common.sh@905 -- # return 0 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.210 14:17:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.469 14:17:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.469 "name": "Existed_Raid", 00:16:05.469 "uuid": "c29f457a-efef-4fb6-8004-2b752c3b6467", 00:16:05.469 "strip_size_kb": 64, 00:16:05.469 "state": "online", 00:16:05.469 "raid_level": "raid0", 00:16:05.469 "superblock": false, 00:16:05.469 
"num_base_bdevs": 4, 00:16:05.469 "num_base_bdevs_discovered": 4, 00:16:05.469 "num_base_bdevs_operational": 4, 00:16:05.469 "base_bdevs_list": [ 00:16:05.469 { 00:16:05.469 "name": "BaseBdev1", 00:16:05.469 "uuid": "a7083ca4-abe1-4d61-99f6-bf184feed913", 00:16:05.469 "is_configured": true, 00:16:05.469 "data_offset": 0, 00:16:05.469 "data_size": 65536 00:16:05.469 }, 00:16:05.469 { 00:16:05.469 "name": "BaseBdev2", 00:16:05.469 "uuid": "1e3e1742-221a-4de2-9bb7-d0aec5f4c775", 00:16:05.469 "is_configured": true, 00:16:05.469 "data_offset": 0, 00:16:05.469 "data_size": 65536 00:16:05.469 }, 00:16:05.469 { 00:16:05.469 "name": "BaseBdev3", 00:16:05.469 "uuid": "9def12ee-5aeb-4f4c-bfe2-36b2b8833e2b", 00:16:05.469 "is_configured": true, 00:16:05.469 "data_offset": 0, 00:16:05.469 "data_size": 65536 00:16:05.469 }, 00:16:05.469 { 00:16:05.469 "name": "BaseBdev4", 00:16:05.469 "uuid": "51f848eb-bed8-48a4-92e5-5d8c71a0f295", 00:16:05.469 "is_configured": true, 00:16:05.469 "data_offset": 0, 00:16:05.469 "data_size": 65536 00:16:05.469 } 00:16:05.469 ] 00:16:05.469 }' 00:16:05.469 14:17:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.469 14:17:57 -- common/autotest_common.sh@10 -- # set +x 00:16:06.036 14:17:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:06.295 [2024-11-18 14:17:58.112097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:06.295 [2024-11-18 14:17:58.112257] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.295 [2024-11-18 14:17:58.112450] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.295 "name": "Existed_Raid", 00:16:06.295 "uuid": "c29f457a-efef-4fb6-8004-2b752c3b6467", 00:16:06.295 "strip_size_kb": 64, 00:16:06.295 "state": "offline", 00:16:06.295 "raid_level": "raid0", 00:16:06.295 "superblock": false, 00:16:06.295 "num_base_bdevs": 4, 00:16:06.295 "num_base_bdevs_discovered": 3, 00:16:06.295 "num_base_bdevs_operational": 3, 00:16:06.295 
"base_bdevs_list": [ 00:16:06.295 { 00:16:06.295 "name": null, 00:16:06.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.295 "is_configured": false, 00:16:06.295 "data_offset": 0, 00:16:06.295 "data_size": 65536 00:16:06.295 }, 00:16:06.295 { 00:16:06.295 "name": "BaseBdev2", 00:16:06.295 "uuid": "1e3e1742-221a-4de2-9bb7-d0aec5f4c775", 00:16:06.295 "is_configured": true, 00:16:06.295 "data_offset": 0, 00:16:06.295 "data_size": 65536 00:16:06.295 }, 00:16:06.295 { 00:16:06.295 "name": "BaseBdev3", 00:16:06.295 "uuid": "9def12ee-5aeb-4f4c-bfe2-36b2b8833e2b", 00:16:06.295 "is_configured": true, 00:16:06.295 "data_offset": 0, 00:16:06.295 "data_size": 65536 00:16:06.295 }, 00:16:06.295 { 00:16:06.295 "name": "BaseBdev4", 00:16:06.295 "uuid": "51f848eb-bed8-48a4-92e5-5d8c71a0f295", 00:16:06.295 "is_configured": true, 00:16:06.295 "data_offset": 0, 00:16:06.295 "data_size": 65536 00:16:06.295 } 00:16:06.295 ] 00:16:06.295 }' 00:16:06.295 14:17:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.295 14:17:58 -- common/autotest_common.sh@10 -- # set +x 00:16:06.862 14:17:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:06.862 14:17:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:06.862 14:17:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.862 14:17:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:07.120 14:17:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:07.120 14:17:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.120 14:17:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:07.378 [2024-11-18 14:17:59.273059] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.378 14:17:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:07.378 14:17:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:07.378 14:17:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.378 14:17:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:07.637 14:17:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:07.637 14:17:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.637 14:17:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:07.895 [2024-11-18 14:17:59.722334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:07.895 14:17:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:07.895 14:17:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:07.895 14:17:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.895 14:17:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:08.154 14:17:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:08.154 14:17:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.154 14:17:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:08.154 [2024-11-18 14:18:00.195523] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:08.154 [2024-11-18 14:18:00.195724] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 
name Existed_Raid, state offline 00:16:08.154 14:18:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:08.154 14:18:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:08.154 14:18:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.154 14:18:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:08.413 14:18:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:08.413 14:18:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:08.413 14:18:00 -- bdev/bdev_raid.sh@287 -- # killprocess 128557 00:16:08.413 14:18:00 -- common/autotest_common.sh@936 -- # '[' -z 128557 ']' 00:16:08.413 14:18:00 -- common/autotest_common.sh@940 -- # kill -0 128557 00:16:08.413 14:18:00 -- common/autotest_common.sh@941 -- # uname 00:16:08.413 14:18:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.413 14:18:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128557 00:16:08.413 killing process with pid 128557 00:16:08.413 14:18:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:08.413 14:18:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:08.413 14:18:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128557' 00:16:08.413 14:18:00 -- common/autotest_common.sh@955 -- # kill 128557 00:16:08.413 [2024-11-18 14:18:00.432533] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.413 14:18:00 -- common/autotest_common.sh@960 -- # wait 128557 00:16:08.413 [2024-11-18 14:18:00.432632] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.671 14:18:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:08.671 00:16:08.671 real 0m12.325s 00:16:08.671 user 0m22.740s 00:16:08.671 ************************************ 00:16:08.671 END TEST raid_state_function_test 00:16:08.671 ************************************ 00:16:08.671 sys 0m1.472s 00:16:08.671 14:18:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:08.671 14:18:00 -- common/autotest_common.sh@10 -- # set +x 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:16:08.930 14:18:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:08.930 14:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:08.930 14:18:00 -- common/autotest_common.sh@10 -- # set +x 00:16:08.930 ************************************ 00:16:08.930 START TEST raid_state_function_test_sb 00:16:08.930 ************************************ 00:16:08.930 14:18:00 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=128976 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128976' 00:16:08.930 Process raid pid: 128976 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128976 /var/tmp/spdk-raid.sock 00:16:08.930 14:18:00 -- common/autotest_common.sh@829 -- # '[' -z 128976 ']' 00:16:08.930 14:18:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.930 14:18:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.930 14:18:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:08.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.930 14:18:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.930 14:18:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.930 14:18:00 -- common/autotest_common.sh@10 -- # set +x 00:16:08.930 [2024-11-18 14:18:00.853947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
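# --- editor's annotation: not part of the captured log ----------------------
# The surrounding trace shows raid_state_function_test_sb driving the bdev_svc
# app over the /var/tmp/spdk-raid.sock RPC socket. A condensed, hedged sketch
# of that flow, using only rpc.py invocations that appear verbatim in this log
# (ordering simplified: the test deliberately creates the raid before any base
# bdev exists, which is why it sits in the "configuring" state below until all
# four base bdevs have been discovered):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# -z 64: 64 KiB strip size; -s: write a superblock onto each base bdev
$rpc bdev_raid_create -z 64 -s -r raid0 \
     -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "$b"   # 32 MiB malloc bdev, 512 B blocks
    $rpc bdev_wait_for_examine               # let examine callbacks finish
    $rpc bdev_get_bdevs -b "$b" -t 2000      # waitforbdev: poll up to 2000 ms
done
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
# ----------------------------------------------------------------------------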
00:16:08.930 [2024-11-18 14:18:00.854192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.930 [2024-11-18 14:18:00.994764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.189 [2024-11-18 14:18:01.062207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.189 [2024-11-18 14:18:01.132434] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.756 14:18:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.756 14:18:01 -- common/autotest_common.sh@862 -- # return 0 00:16:09.756 14:18:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:10.015 [2024-11-18 14:18:02.002825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:10.015 [2024-11-18 14:18:02.002924] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:10.015 [2024-11-18 14:18:02.002939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.015 [2024-11-18 14:18:02.002961] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.015 [2024-11-18 14:18:02.002969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:10.015 [2024-11-18 14:18:02.003014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:10.015 [2024-11-18 14:18:02.003024] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:10.015 [2024-11-18 14:18:02.003054] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.015 14:18:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.274 14:18:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:10.274 "name": "Existed_Raid", 00:16:10.274 "uuid": "11ac2a31-9133-4c34-a7ed-619f84cb07ed", 00:16:10.274 "strip_size_kb": 64, 00:16:10.274 "state": "configuring", 00:16:10.274 "raid_level": "raid0", 00:16:10.274 "superblock": true, 00:16:10.274 "num_base_bdevs": 4, 00:16:10.274 "num_base_bdevs_discovered": 0, 00:16:10.274 "num_base_bdevs_operational": 4, 00:16:10.274 "base_bdevs_list": [ 00:16:10.274 { 00:16:10.274 
"name": "BaseBdev1", 00:16:10.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.274 "is_configured": false, 00:16:10.274 "data_offset": 0, 00:16:10.274 "data_size": 0 00:16:10.274 }, 00:16:10.274 { 00:16:10.274 "name": "BaseBdev2", 00:16:10.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.274 "is_configured": false, 00:16:10.274 "data_offset": 0, 00:16:10.274 "data_size": 0 00:16:10.274 }, 00:16:10.274 { 00:16:10.274 "name": "BaseBdev3", 00:16:10.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.274 "is_configured": false, 00:16:10.274 "data_offset": 0, 00:16:10.274 "data_size": 0 00:16:10.274 }, 00:16:10.274 { 00:16:10.274 "name": "BaseBdev4", 00:16:10.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.274 "is_configured": false, 00:16:10.274 "data_offset": 0, 00:16:10.274 "data_size": 0 00:16:10.274 } 00:16:10.274 ] 00:16:10.274 }' 00:16:10.274 14:18:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:10.274 14:18:02 -- common/autotest_common.sh@10 -- # set +x 00:16:10.841 14:18:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:11.099 [2024-11-18 14:18:03.014823] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.099 [2024-11-18 14:18:03.014865] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:11.099 14:18:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:11.358 [2024-11-18 14:18:03.254884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:11.358 [2024-11-18 14:18:03.254935] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:11.358 [2024-11-18 14:18:03.254947] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.358 [2024-11-18 14:18:03.254975] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.358 [2024-11-18 14:18:03.254984] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:11.358 [2024-11-18 14:18:03.255003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:11.358 [2024-11-18 14:18:03.255011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:11.358 [2024-11-18 14:18:03.255038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:11.358 14:18:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:11.617 [2024-11-18 14:18:03.516949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.617 BaseBdev1 00:16:11.617 14:18:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:11.617 14:18:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:11.617 14:18:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.617 14:18:03 -- common/autotest_common.sh@899 -- # local i 00:16:11.617 14:18:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.617 14:18:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.617 14:18:03 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.876 14:18:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.876 [ 00:16:11.876 { 00:16:11.876 "name": "BaseBdev1", 00:16:11.876 "aliases": [ 00:16:11.876 "cf41ac77-cb4d-410c-b6a3-8265c5aee0e8" 00:16:11.876 ], 00:16:11.876 "product_name": "Malloc disk", 00:16:11.876 "block_size": 512, 00:16:11.876 "num_blocks": 65536, 00:16:11.876 "uuid": "cf41ac77-cb4d-410c-b6a3-8265c5aee0e8", 00:16:11.876 "assigned_rate_limits": { 00:16:11.876 "rw_ios_per_sec": 0, 00:16:11.876 "rw_mbytes_per_sec": 0, 00:16:11.876 "r_mbytes_per_sec": 0, 00:16:11.876 "w_mbytes_per_sec": 0 00:16:11.876 }, 00:16:11.876 "claimed": true, 00:16:11.876 "claim_type": "exclusive_write", 00:16:11.876 "zoned": false, 00:16:11.876 "supported_io_types": { 00:16:11.876 "read": true, 00:16:11.876 "write": true, 00:16:11.876 "unmap": true, 00:16:11.876 "write_zeroes": true, 00:16:11.876 "flush": true, 00:16:11.876 "reset": true, 00:16:11.876 "compare": false, 00:16:11.876 "compare_and_write": false, 00:16:11.876 "abort": true, 00:16:11.876 "nvme_admin": false, 00:16:11.876 "nvme_io": false 00:16:11.876 }, 00:16:11.876 "memory_domains": [ 00:16:11.876 { 00:16:11.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.876 "dma_device_type": 2 00:16:11.876 } 00:16:11.876 ], 00:16:11.876 "driver_specific": {} 00:16:11.876 } 00:16:11.876 ] 00:16:11.876 14:18:03 -- common/autotest_common.sh@905 -- # return 0 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.876 14:18:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.135 14:18:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.135 "name": "Existed_Raid", 00:16:12.135 "uuid": "6c25cc8a-72ad-4b91-8093-993fd758ed60", 00:16:12.135 "strip_size_kb": 64, 00:16:12.135 "state": "configuring", 00:16:12.135 "raid_level": "raid0", 00:16:12.135 "superblock": true, 00:16:12.135 "num_base_bdevs": 4, 00:16:12.135 "num_base_bdevs_discovered": 1, 00:16:12.135 "num_base_bdevs_operational": 4, 00:16:12.135 "base_bdevs_list": [ 00:16:12.135 { 00:16:12.135 "name": "BaseBdev1", 00:16:12.135 "uuid": "cf41ac77-cb4d-410c-b6a3-8265c5aee0e8", 00:16:12.135 "is_configured": true, 00:16:12.135 "data_offset": 2048, 00:16:12.135 "data_size": 63488 00:16:12.135 }, 00:16:12.135 { 00:16:12.135 "name": "BaseBdev2", 00:16:12.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.135 "is_configured": false, 00:16:12.135 "data_offset": 0, 00:16:12.135 "data_size": 0 00:16:12.135 }, 
00:16:12.135 { 00:16:12.135 "name": "BaseBdev3", 00:16:12.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.135 "is_configured": false, 00:16:12.135 "data_offset": 0, 00:16:12.135 "data_size": 0 00:16:12.135 }, 00:16:12.135 { 00:16:12.135 "name": "BaseBdev4", 00:16:12.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.135 "is_configured": false, 00:16:12.135 "data_offset": 0, 00:16:12.135 "data_size": 0 00:16:12.135 } 00:16:12.135 ] 00:16:12.135 }' 00:16:12.135 14:18:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.135 14:18:04 -- common/autotest_common.sh@10 -- # set +x 00:16:12.702 14:18:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:12.961 [2024-11-18 14:18:04.897175] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.961 [2024-11-18 14:18:04.897221] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:12.961 14:18:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:12.961 14:18:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:13.220 14:18:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:13.478 BaseBdev1 00:16:13.478 14:18:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:13.478 14:18:05 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:13.478 14:18:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.478 14:18:05 -- common/autotest_common.sh@899 -- # local i 00:16:13.478 14:18:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.479 14:18:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.479 14:18:05 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.479 14:18:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:13.738 [ 00:16:13.738 { 00:16:13.738 "name": "BaseBdev1", 00:16:13.738 "aliases": [ 00:16:13.738 "c87890fc-6b86-4ad7-86bb-527cf21dbf76" 00:16:13.738 ], 00:16:13.738 "product_name": "Malloc disk", 00:16:13.738 "block_size": 512, 00:16:13.738 "num_blocks": 65536, 00:16:13.738 "uuid": "c87890fc-6b86-4ad7-86bb-527cf21dbf76", 00:16:13.738 "assigned_rate_limits": { 00:16:13.738 "rw_ios_per_sec": 0, 00:16:13.738 "rw_mbytes_per_sec": 0, 00:16:13.738 "r_mbytes_per_sec": 0, 00:16:13.738 "w_mbytes_per_sec": 0 00:16:13.738 }, 00:16:13.738 "claimed": false, 00:16:13.738 "zoned": false, 00:16:13.738 "supported_io_types": { 00:16:13.738 "read": true, 00:16:13.738 "write": true, 00:16:13.738 "unmap": true, 00:16:13.738 "write_zeroes": true, 00:16:13.738 "flush": true, 00:16:13.738 "reset": true, 00:16:13.738 "compare": false, 00:16:13.738 "compare_and_write": false, 00:16:13.738 "abort": true, 00:16:13.738 "nvme_admin": false, 00:16:13.738 "nvme_io": false 00:16:13.738 }, 00:16:13.738 "memory_domains": [ 00:16:13.738 { 00:16:13.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.738 "dma_device_type": 2 00:16:13.738 } 00:16:13.738 ], 00:16:13.738 "driver_specific": {} 00:16:13.738 } 00:16:13.738 ] 00:16:13.738 14:18:05 -- common/autotest_common.sh@905 -- # return 0 00:16:13.738 14:18:05 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:13.996 [2024-11-18 14:18:05.870597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.996 [2024-11-18 14:18:05.872599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.996 [2024-11-18 14:18:05.872674] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.996 [2024-11-18 14:18:05.872688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.996 [2024-11-18 14:18:05.872717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.996 [2024-11-18 14:18:05.872727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.996 [2024-11-18 14:18:05.872746] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.996 14:18:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:13.997 14:18:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.997 14:18:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.997 14:18:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.997 14:18:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.997 14:18:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.997 14:18:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.255 14:18:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.255 "name": "Existed_Raid", 00:16:14.255 "uuid": "cb26189d-0f2f-4f66-b6d9-4527c66ff1c5", 00:16:14.255 "strip_size_kb": 64, 00:16:14.255 "state": "configuring", 00:16:14.255 "raid_level": "raid0", 00:16:14.255 "superblock": true, 00:16:14.255 "num_base_bdevs": 4, 00:16:14.255 "num_base_bdevs_discovered": 1, 00:16:14.255 "num_base_bdevs_operational": 4, 00:16:14.255 "base_bdevs_list": [ 00:16:14.255 { 00:16:14.255 "name": "BaseBdev1", 00:16:14.255 "uuid": "c87890fc-6b86-4ad7-86bb-527cf21dbf76", 00:16:14.255 "is_configured": true, 00:16:14.255 "data_offset": 2048, 00:16:14.255 "data_size": 63488 00:16:14.255 }, 00:16:14.255 { 00:16:14.255 "name": "BaseBdev2", 00:16:14.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.255 "is_configured": false, 00:16:14.255 "data_offset": 0, 00:16:14.255 "data_size": 0 00:16:14.255 }, 00:16:14.255 { 00:16:14.255 "name": "BaseBdev3", 00:16:14.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.255 "is_configured": false, 00:16:14.255 "data_offset": 0, 00:16:14.255 "data_size": 0 00:16:14.255 }, 00:16:14.255 { 00:16:14.255 "name": "BaseBdev4", 00:16:14.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.255 "is_configured": 
false, 00:16:14.255 "data_offset": 0, 00:16:14.255 "data_size": 0 00:16:14.255 } 00:16:14.255 ] 00:16:14.255 }' 00:16:14.255 14:18:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.255 14:18:06 -- common/autotest_common.sh@10 -- # set +x 00:16:14.822 14:18:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:15.081 [2024-11-18 14:18:06.911954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.081 BaseBdev2 00:16:15.081 14:18:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:15.081 14:18:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:15.081 14:18:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:15.081 14:18:06 -- common/autotest_common.sh@899 -- # local i 00:16:15.081 14:18:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:15.081 14:18:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:15.081 14:18:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.081 14:18:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:15.339 [ 00:16:15.339 { 00:16:15.339 "name": "BaseBdev2", 00:16:15.339 "aliases": [ 00:16:15.339 "3f2e36de-e27f-4171-894e-2b0df090633f" 00:16:15.339 ], 00:16:15.339 "product_name": "Malloc disk", 00:16:15.339 "block_size": 512, 00:16:15.339 "num_blocks": 65536, 00:16:15.339 "uuid": "3f2e36de-e27f-4171-894e-2b0df090633f", 00:16:15.339 "assigned_rate_limits": { 00:16:15.339 "rw_ios_per_sec": 0, 00:16:15.340 "rw_mbytes_per_sec": 0, 00:16:15.340 "r_mbytes_per_sec": 0, 00:16:15.340 "w_mbytes_per_sec": 0 00:16:15.340 }, 00:16:15.340 "claimed": true, 00:16:15.340 "claim_type": "exclusive_write", 00:16:15.340 "zoned": false, 00:16:15.340 "supported_io_types": { 00:16:15.340 "read": true, 00:16:15.340 "write": true, 00:16:15.340 "unmap": true, 00:16:15.340 "write_zeroes": true, 00:16:15.340 "flush": true, 00:16:15.340 "reset": true, 00:16:15.340 "compare": false, 00:16:15.340 "compare_and_write": false, 00:16:15.340 "abort": true, 00:16:15.340 "nvme_admin": false, 00:16:15.340 "nvme_io": false 00:16:15.340 }, 00:16:15.340 "memory_domains": [ 00:16:15.340 { 00:16:15.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.340 "dma_device_type": 2 00:16:15.340 } 00:16:15.340 ], 00:16:15.340 "driver_specific": {} 00:16:15.340 } 00:16:15.340 ] 00:16:15.340 14:18:07 -- common/autotest_common.sh@905 -- # return 0 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.340 
14:18:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.340 14:18:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.598 14:18:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.598 "name": "Existed_Raid", 00:16:15.598 "uuid": "cb26189d-0f2f-4f66-b6d9-4527c66ff1c5", 00:16:15.598 "strip_size_kb": 64, 00:16:15.598 "state": "configuring", 00:16:15.598 "raid_level": "raid0", 00:16:15.598 "superblock": true, 00:16:15.598 "num_base_bdevs": 4, 00:16:15.598 "num_base_bdevs_discovered": 2, 00:16:15.598 "num_base_bdevs_operational": 4, 00:16:15.598 "base_bdevs_list": [ 00:16:15.598 { 00:16:15.598 "name": "BaseBdev1", 00:16:15.598 "uuid": "c87890fc-6b86-4ad7-86bb-527cf21dbf76", 00:16:15.598 "is_configured": true, 00:16:15.598 "data_offset": 2048, 00:16:15.598 "data_size": 63488 00:16:15.598 }, 00:16:15.598 { 00:16:15.598 "name": "BaseBdev2", 00:16:15.598 "uuid": "3f2e36de-e27f-4171-894e-2b0df090633f", 00:16:15.598 "is_configured": true, 00:16:15.598 "data_offset": 2048, 00:16:15.599 "data_size": 63488 00:16:15.599 }, 00:16:15.599 { 00:16:15.599 "name": "BaseBdev3", 00:16:15.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.599 "is_configured": false, 00:16:15.599 "data_offset": 0, 00:16:15.599 "data_size": 0 00:16:15.599 }, 00:16:15.599 { 00:16:15.599 "name": "BaseBdev4", 00:16:15.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.599 "is_configured": false, 00:16:15.599 "data_offset": 0, 00:16:15.599 "data_size": 0 00:16:15.599 } 00:16:15.599 ] 00:16:15.599 }' 00:16:15.599 14:18:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.599 14:18:07 -- common/autotest_common.sh@10 -- # set +x 00:16:16.166 14:18:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:16.424 [2024-11-18 14:18:08.419849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.424 BaseBdev3 00:16:16.424 14:18:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:16.424 14:18:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:16.424 14:18:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:16.424 14:18:08 -- common/autotest_common.sh@899 -- # local i 00:16:16.424 14:18:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:16.424 14:18:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:16.425 14:18:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:16.683 14:18:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:16.942 [ 00:16:16.942 { 00:16:16.942 "name": "BaseBdev3", 00:16:16.942 "aliases": [ 00:16:16.942 "d31df8de-d085-4a07-beb5-a2ef2c4e7bfd" 00:16:16.942 ], 00:16:16.942 "product_name": "Malloc disk", 00:16:16.942 "block_size": 512, 00:16:16.942 "num_blocks": 65536, 00:16:16.942 "uuid": "d31df8de-d085-4a07-beb5-a2ef2c4e7bfd", 00:16:16.942 "assigned_rate_limits": { 00:16:16.942 "rw_ios_per_sec": 0, 00:16:16.942 "rw_mbytes_per_sec": 0, 00:16:16.942 "r_mbytes_per_sec": 0, 00:16:16.942 "w_mbytes_per_sec": 0 00:16:16.942 }, 00:16:16.942 "claimed": true, 00:16:16.942 "claim_type": "exclusive_write", 00:16:16.942 "zoned": false, 
00:16:16.942 "supported_io_types": { 00:16:16.942 "read": true, 00:16:16.942 "write": true, 00:16:16.942 "unmap": true, 00:16:16.942 "write_zeroes": true, 00:16:16.942 "flush": true, 00:16:16.942 "reset": true, 00:16:16.942 "compare": false, 00:16:16.942 "compare_and_write": false, 00:16:16.942 "abort": true, 00:16:16.942 "nvme_admin": false, 00:16:16.942 "nvme_io": false 00:16:16.942 }, 00:16:16.942 "memory_domains": [ 00:16:16.942 { 00:16:16.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.942 "dma_device_type": 2 00:16:16.942 } 00:16:16.942 ], 00:16:16.942 "driver_specific": {} 00:16:16.942 } 00:16:16.942 ] 00:16:16.942 14:18:08 -- common/autotest_common.sh@905 -- # return 0 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.942 "name": "Existed_Raid", 00:16:16.942 "uuid": "cb26189d-0f2f-4f66-b6d9-4527c66ff1c5", 00:16:16.942 "strip_size_kb": 64, 00:16:16.942 "state": "configuring", 00:16:16.942 "raid_level": "raid0", 00:16:16.942 "superblock": true, 00:16:16.942 "num_base_bdevs": 4, 00:16:16.942 "num_base_bdevs_discovered": 3, 00:16:16.942 "num_base_bdevs_operational": 4, 00:16:16.942 "base_bdevs_list": [ 00:16:16.942 { 00:16:16.942 "name": "BaseBdev1", 00:16:16.942 "uuid": "c87890fc-6b86-4ad7-86bb-527cf21dbf76", 00:16:16.942 "is_configured": true, 00:16:16.942 "data_offset": 2048, 00:16:16.942 "data_size": 63488 00:16:16.942 }, 00:16:16.942 { 00:16:16.942 "name": "BaseBdev2", 00:16:16.942 "uuid": "3f2e36de-e27f-4171-894e-2b0df090633f", 00:16:16.942 "is_configured": true, 00:16:16.942 "data_offset": 2048, 00:16:16.942 "data_size": 63488 00:16:16.942 }, 00:16:16.942 { 00:16:16.942 "name": "BaseBdev3", 00:16:16.942 "uuid": "d31df8de-d085-4a07-beb5-a2ef2c4e7bfd", 00:16:16.942 "is_configured": true, 00:16:16.942 "data_offset": 2048, 00:16:16.942 "data_size": 63488 00:16:16.942 }, 00:16:16.942 { 00:16:16.942 "name": "BaseBdev4", 00:16:16.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.942 "is_configured": false, 00:16:16.942 "data_offset": 0, 00:16:16.942 "data_size": 0 00:16:16.942 } 00:16:16.942 ] 00:16:16.942 }' 00:16:16.942 14:18:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.942 14:18:08 -- common/autotest_common.sh@10 -- # set +x 00:16:17.509 14:18:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:17.768 [2024-11-18 14:18:09.755767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.768 [2024-11-18 14:18:09.755979] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:17.768 [2024-11-18 14:18:09.755996] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:17.768 [2024-11-18 14:18:09.756157] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:16:17.768 BaseBdev4 00:16:17.768 [2024-11-18 14:18:09.756581] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:17.768 [2024-11-18 14:18:09.756605] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:17.768 [2024-11-18 14:18:09.756773] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.768 14:18:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:17.768 14:18:09 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:17.768 14:18:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:17.768 14:18:09 -- common/autotest_common.sh@899 -- # local i 00:16:17.768 14:18:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:17.768 14:18:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:17.768 14:18:09 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.027 14:18:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:18.286 [ 00:16:18.286 { 00:16:18.286 "name": "BaseBdev4", 00:16:18.286 "aliases": [ 00:16:18.286 "6a5d24e0-9517-40d7-a070-947176f46b97" 00:16:18.286 ], 00:16:18.286 "product_name": "Malloc disk", 00:16:18.286 "block_size": 512, 00:16:18.286 "num_blocks": 65536, 00:16:18.286 "uuid": "6a5d24e0-9517-40d7-a070-947176f46b97", 00:16:18.286 "assigned_rate_limits": { 00:16:18.286 "rw_ios_per_sec": 0, 00:16:18.286 "rw_mbytes_per_sec": 0, 00:16:18.286 "r_mbytes_per_sec": 0, 00:16:18.286 "w_mbytes_per_sec": 0 00:16:18.286 }, 00:16:18.286 "claimed": true, 00:16:18.286 "claim_type": "exclusive_write", 00:16:18.286 "zoned": false, 00:16:18.286 "supported_io_types": { 00:16:18.286 "read": true, 00:16:18.286 "write": true, 00:16:18.286 "unmap": true, 00:16:18.286 "write_zeroes": true, 00:16:18.286 "flush": true, 00:16:18.286 "reset": true, 00:16:18.286 "compare": false, 00:16:18.286 "compare_and_write": false, 00:16:18.286 "abort": true, 00:16:18.286 "nvme_admin": false, 00:16:18.286 "nvme_io": false 00:16:18.286 }, 00:16:18.286 "memory_domains": [ 00:16:18.286 { 00:16:18.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.286 "dma_device_type": 2 00:16:18.286 } 00:16:18.286 ], 00:16:18.286 "driver_specific": {} 00:16:18.286 } 00:16:18.286 ] 00:16:18.286 14:18:10 -- common/autotest_common.sh@905 -- # return 0 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.286 14:18:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.545 14:18:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.545 "name": "Existed_Raid", 00:16:18.545 "uuid": "cb26189d-0f2f-4f66-b6d9-4527c66ff1c5", 00:16:18.545 "strip_size_kb": 64, 00:16:18.545 "state": "online", 00:16:18.545 "raid_level": "raid0", 00:16:18.545 "superblock": true, 00:16:18.545 "num_base_bdevs": 4, 00:16:18.545 "num_base_bdevs_discovered": 4, 00:16:18.545 "num_base_bdevs_operational": 4, 00:16:18.545 "base_bdevs_list": [ 00:16:18.545 { 00:16:18.545 "name": "BaseBdev1", 00:16:18.545 "uuid": "c87890fc-6b86-4ad7-86bb-527cf21dbf76", 00:16:18.545 "is_configured": true, 00:16:18.545 "data_offset": 2048, 00:16:18.545 "data_size": 63488 00:16:18.545 }, 00:16:18.545 { 00:16:18.545 "name": "BaseBdev2", 00:16:18.545 "uuid": "3f2e36de-e27f-4171-894e-2b0df090633f", 00:16:18.545 "is_configured": true, 00:16:18.545 "data_offset": 2048, 00:16:18.545 "data_size": 63488 00:16:18.545 }, 00:16:18.545 { 00:16:18.545 "name": "BaseBdev3", 00:16:18.545 "uuid": "d31df8de-d085-4a07-beb5-a2ef2c4e7bfd", 00:16:18.545 "is_configured": true, 00:16:18.545 "data_offset": 2048, 00:16:18.545 "data_size": 63488 00:16:18.545 }, 00:16:18.545 { 00:16:18.545 "name": "BaseBdev4", 00:16:18.545 "uuid": "6a5d24e0-9517-40d7-a070-947176f46b97", 00:16:18.545 "is_configured": true, 00:16:18.545 "data_offset": 2048, 00:16:18.545 "data_size": 63488 00:16:18.545 } 00:16:18.545 ] 00:16:18.545 }' 00:16:18.545 14:18:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.545 14:18:10 -- common/autotest_common.sh@10 -- # set +x 00:16:19.113 14:18:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:19.372 [2024-11-18 14:18:11.224131] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.372 [2024-11-18 14:18:11.224163] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.372 [2024-11-18 14:18:11.224233] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.372 14:18:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.373 "name": "Existed_Raid", 00:16:19.373 "uuid": "cb26189d-0f2f-4f66-b6d9-4527c66ff1c5", 00:16:19.373 "strip_size_kb": 64, 00:16:19.373 "state": "offline", 00:16:19.373 "raid_level": "raid0", 00:16:19.373 "superblock": true, 00:16:19.373 "num_base_bdevs": 4, 00:16:19.373 "num_base_bdevs_discovered": 3, 00:16:19.373 "num_base_bdevs_operational": 3, 00:16:19.373 "base_bdevs_list": [ 00:16:19.373 { 00:16:19.373 "name": null, 00:16:19.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.373 "is_configured": false, 00:16:19.373 "data_offset": 2048, 00:16:19.373 "data_size": 63488 00:16:19.373 }, 00:16:19.373 { 00:16:19.373 "name": "BaseBdev2", 00:16:19.373 "uuid": "3f2e36de-e27f-4171-894e-2b0df090633f", 00:16:19.373 "is_configured": true, 00:16:19.373 "data_offset": 2048, 00:16:19.373 "data_size": 63488 00:16:19.373 }, 00:16:19.373 { 00:16:19.373 "name": "BaseBdev3", 00:16:19.373 "uuid": "d31df8de-d085-4a07-beb5-a2ef2c4e7bfd", 00:16:19.373 "is_configured": true, 00:16:19.373 "data_offset": 2048, 00:16:19.373 "data_size": 63488 00:16:19.373 }, 00:16:19.373 { 00:16:19.373 "name": "BaseBdev4", 00:16:19.373 "uuid": "6a5d24e0-9517-40d7-a070-947176f46b97", 00:16:19.373 "is_configured": true, 00:16:19.373 "data_offset": 2048, 00:16:19.373 "data_size": 63488 00:16:19.373 } 00:16:19.373 ] 00:16:19.373 }' 00:16:19.373 14:18:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.373 14:18:11 -- common/autotest_common.sh@10 -- # set +x 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.316 14:18:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:20.574 [2024-11-18 14:18:12.503919] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.574 14:18:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:20.574 14:18:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:20.574 14:18:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.574 14:18:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:20.832 14:18:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:20.832 14:18:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.832 14:18:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:16:20.832 [2024-11-18 14:18:12.884815] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.091 14:18:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:21.091 14:18:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:21.091 14:18:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.091 14:18:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:21.091 14:18:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:21.091 14:18:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:21.091 14:18:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:21.350 [2024-11-18 14:18:13.330370] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:21.350 [2024-11-18 14:18:13.330427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:16:21.350 14:18:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:21.350 14:18:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:21.350 14:18:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.350 14:18:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:21.609 14:18:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:21.609 14:18:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:21.609 14:18:13 -- bdev/bdev_raid.sh@287 -- # killprocess 128976 00:16:21.609 14:18:13 -- common/autotest_common.sh@936 -- # '[' -z 128976 ']' 00:16:21.609 14:18:13 -- common/autotest_common.sh@940 -- # kill -0 128976 00:16:21.609 14:18:13 -- common/autotest_common.sh@941 -- # uname 00:16:21.609 14:18:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.609 14:18:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128976 00:16:21.609 killing process with pid 128976 00:16:21.609 14:18:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:21.609 14:18:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:21.609 14:18:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128976' 00:16:21.609 14:18:13 -- common/autotest_common.sh@955 -- # kill 128976 00:16:21.609 14:18:13 -- common/autotest_common.sh@960 -- # wait 128976 00:16:21.609 [2024-11-18 14:18:13.596921] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.609 [2024-11-18 14:18:13.597008] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.868 ************************************ 00:16:21.868 END TEST raid_state_function_test_sb 00:16:21.868 ************************************ 00:16:21.868 14:18:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:21.868 00:16:21.868 real 0m13.098s 00:16:21.868 user 0m24.242s 00:16:21.868 sys 0m1.514s 00:16:21.868 14:18:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:21.868 14:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:21.868 14:18:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:21.868 14:18:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:21.868 14:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.868 14:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:22.127 ************************************ 00:16:22.127 START 
TEST raid_superblock_test 00:16:22.127 ************************************ 00:16:22.127 14:18:13 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=129405 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129405 /var/tmp/spdk-raid.sock 00:16:22.127 14:18:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:22.127 14:18:13 -- common/autotest_common.sh@829 -- # '[' -z 129405 ']' 00:16:22.127 14:18:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:22.127 14:18:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.127 14:18:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:22.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:22.127 14:18:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.127 14:18:13 -- common/autotest_common.sh@10 -- # set +x 00:16:22.127 [2024-11-18 14:18:14.006978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
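# --- editor's annotation: not part of the captured log ----------------------
# raid_superblock_test, traced below, layers a passthru bdev (pt1..pt4) with a
# fixed UUID on top of each malloc bdev and assembles the raid0 volume from the
# passthru devices, so the superblock lands on stable-UUID bdevs. A condensed
# sketch using only RPC calls that appear verbatim in this log (names and UUIDs
# copied from the trace):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
done
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
# ----------------------------------------------------------------------------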
00:16:22.127 [2024-11-18 14:18:14.007291] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129405 ] 00:16:22.127 [2024-11-18 14:18:14.153490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.386 [2024-11-18 14:18:14.218676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.386 [2024-11-18 14:18:14.288882] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.953 14:18:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.953 14:18:14 -- common/autotest_common.sh@862 -- # return 0 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:22.953 14:18:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:23.211 malloc1 00:16:23.211 14:18:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.470 [2024-11-18 14:18:15.373046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.470 [2024-11-18 14:18:15.373166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.470 [2024-11-18 14:18:15.373216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:23.470 [2024-11-18 14:18:15.373272] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.470 [2024-11-18 14:18:15.376214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.470 [2024-11-18 14:18:15.376285] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.470 pt1 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:23.729 malloc2 00:16:23.729 14:18:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:23.470 14:18:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:16:23.729 malloc2
00:16:23.729 14:18:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:23.729 [2024-11-18 14:18:15.798659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:23.729 [2024-11-18 14:18:15.798726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:23.729 [2024-11-18 14:18:15.798766] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:16:23.729 [2024-11-18 14:18:15.798815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:23.729 [2024-11-18 14:18:15.801054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:23.729 [2024-11-18 14:18:15.801107] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:23.988 pt2
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:23.988 14:18:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:16:23.988 malloc3
00:16:23.988 14:18:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:24.247 [2024-11-18 14:18:16.185931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:24.247 [2024-11-18 14:18:16.185997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:24.247 [2024-11-18 14:18:16.186038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:16:24.247 [2024-11-18 14:18:16.186099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:24.247 [2024-11-18 14:18:16.188347] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:24.247 [2024-11-18 14:18:16.188407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:24.247 pt3
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:24.247 14:18:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:16:24.506 malloc4
00:16:24.506 14:18:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:24.764 [2024-11-18 14:18:16.631662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:24.764 [2024-11-18 14:18:16.631742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:24.764 [2024-11-18 14:18:16.631778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:16:24.764 [2024-11-18 14:18:16.631826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:24.764 [2024-11-18 14:18:16.634076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:24.764 [2024-11-18 14:18:16.634135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:24.764 pt4
00:16:24.764 14:18:16 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:24.764 14:18:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:24.764 14:18:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:16:24.765 [2024-11-18 14:18:16.815817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:24.765 [2024-11-18 14:18:16.817826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:24.765 [2024-11-18 14:18:16.817901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:24.765 [2024-11-18 14:18:16.817957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:24.765 [2024-11-18 14:18:16.818205] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480
00:16:24.765 [2024-11-18 14:18:16.818230] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:16:24.765 [2024-11-18 14:18:16.818364] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:16:24.765 [2024-11-18 14:18:16.818754] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480
00:16:24.765 [2024-11-18 14:18:16.818778] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480
00:16:24.765 [2024-11-18 14:18:16.818944] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
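The geometry in the debug output is worth checking: with the -s flag, bdev_raid_create reserves room for an on-disk superblock on every leg, so each 65536-block passthru bdev contributes 63488 data blocks at data_offset 2048, and 4 x 63488 = 253952 is exactly the blockcnt printed by raid_bdev_configure_cont. A hedged sketch of the same create call plus a size check (num_blocks is the standard bdev_get_bdevs field; the expected value is derived from this log):

    # Sketch: raid0 over the four passthru bdevs, with superblocks (-s).
    sock=/var/tmp/spdk-raid.sock
    scripts/rpc.py -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # (65536 - 2048) data blocks per leg * 4 legs = 253952 total
    scripts/rpc.py -s "$sock" bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[] | .num_blocks'   # expected: 253952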
"95cbdc8d-8ed4-4f25-aca8-633d8f79f480", 00:16:25.023 "strip_size_kb": 64, 00:16:25.023 "state": "online", 00:16:25.023 "raid_level": "raid0", 00:16:25.023 "superblock": true, 00:16:25.023 "num_base_bdevs": 4, 00:16:25.023 "num_base_bdevs_discovered": 4, 00:16:25.023 "num_base_bdevs_operational": 4, 00:16:25.023 "base_bdevs_list": [ 00:16:25.023 { 00:16:25.023 "name": "pt1", 00:16:25.023 "uuid": "c2f4b032-6262-5271-b3a1-06910a2d0020", 00:16:25.023 "is_configured": true, 00:16:25.023 "data_offset": 2048, 00:16:25.023 "data_size": 63488 00:16:25.023 }, 00:16:25.023 { 00:16:25.023 "name": "pt2", 00:16:25.023 "uuid": "a8bda9ee-73ae-5b3a-840d-b6c92d4e9390", 00:16:25.023 "is_configured": true, 00:16:25.023 "data_offset": 2048, 00:16:25.023 "data_size": 63488 00:16:25.023 }, 00:16:25.023 { 00:16:25.023 "name": "pt3", 00:16:25.023 "uuid": "c732dfbb-01c4-548b-8ee8-86301f0a08cd", 00:16:25.023 "is_configured": true, 00:16:25.023 "data_offset": 2048, 00:16:25.023 "data_size": 63488 00:16:25.023 }, 00:16:25.023 { 00:16:25.023 "name": "pt4", 00:16:25.023 "uuid": "c83c014d-927b-5771-83e1-edee04167552", 00:16:25.023 "is_configured": true, 00:16:25.024 "data_offset": 2048, 00:16:25.024 "data_size": 63488 00:16:25.024 } 00:16:25.024 ] 00:16:25.024 }' 00:16:25.024 14:18:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.024 14:18:17 -- common/autotest_common.sh@10 -- # set +x 00:16:25.590 14:18:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.590 14:18:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:25.849 [2024-11-18 14:18:17.812206] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.849 14:18:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=95cbdc8d-8ed4-4f25-aca8-633d8f79f480 00:16:25.849 14:18:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 95cbdc8d-8ed4-4f25-aca8-633d8f79f480 ']' 00:16:25.849 14:18:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:26.107 [2024-11-18 14:18:18.059950] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.107 [2024-11-18 14:18:18.059979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.107 [2024-11-18 14:18:18.060068] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.107 [2024-11-18 14:18:18.060145] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.107 [2024-11-18 14:18:18.060158] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:16:26.107 14:18:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.107 14:18:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:26.624 14:18:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:26.624 14:18:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
00:16:25.849 14:18:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:26.107 [2024-11-18 14:18:18.059950] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:26.107 [2024-11-18 14:18:18.059979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:26.107 [2024-11-18 14:18:18.060068] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:26.107 [2024-11-18 14:18:18.060145] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:26.107 [2024-11-18 14:18:18.060158] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline
00:16:26.107 14:18:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:26.366 14:18:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:26.624 14:18:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:26.624 14:18:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:26.882 14:18:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:26.882 14:18:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:27.141 14:18:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:27.141 14:18:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:16:27.141 14:18:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:27.141 14:18:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:27.399 14:18:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:16:27.399 14:18:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:27.399 14:18:19 -- common/autotest_common.sh@650 -- # local es=0
00:16:27.399 14:18:19 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:27.399 14:18:19 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:27.399 14:18:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:27.399 14:18:19 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:27.399 14:18:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:27.399 14:18:19 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:27.399 14:18:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:16:27.399 14:18:19 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:27.399 14:18:19 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:27.399 14:18:19 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:27.657 [2024-11-18 14:18:19.580180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:16:27.657 [2024-11-18 14:18:19.582127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:16:27.657 [2024-11-18 14:18:19.582200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:16:27.657 [2024-11-18 14:18:19.582240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:16:27.657 [2024-11-18 14:18:19.582295] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:16:27.657 [2024-11-18 14:18:19.582434] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:16:27.658 [2024-11-18 14:18:19.582473] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:16:27.658 [2024-11-18 14:18:19.582531] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:16:27.658 [2024-11-18 14:18:19.582576] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:27.658 [2024-11-18 14:18:19.582586] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring
00:16:27.658 request:
00:16:27.658 {
00:16:27.658 "name": "raid_bdev1",
00:16:27.658 "raid_level": "raid0",
00:16:27.658 "base_bdevs": [
00:16:27.658 "malloc1",
00:16:27.658 "malloc2",
00:16:27.658 "malloc3",
00:16:27.658 "malloc4"
00:16:27.658 ],
00:16:27.658 "superblock": false,
00:16:27.658 "strip_size_kb": 64,
00:16:27.658 "method": "bdev_raid_create",
00:16:27.658 "req_id": 1
00:16:27.658 }
00:16:27.658 Got JSON-RPC error response
00:16:27.658 response:
00:16:27.658 {
00:16:27.658 "code": -17,
00:16:27.658 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:27.658 }
00:16:27.658 14:18:19 -- common/autotest_common.sh@653 -- # es=1
00:16:27.658 14:18:19 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:16:27.658 14:18:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:16:27.658 14:18:19 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
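This failure is the point of the test: the earlier -s pass stamped a RAID superblock onto malloc1-malloc4, so creating a new array directly on them aborts with "Existing raid superblock found" and the RPC returns code -17 (File exists). The NOT/es bookkeeping above only asserts a non-zero exit status; a compact sketch of the same negative test:

    # Sketch: the create is expected to fail with -17 File exists because
    # the malloc bdevs already carry a raid superblock.
    sock=/var/tmp/spdk-raid.sock
    if scripts/rpc.py -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "expected File exists error, but create succeeded" >&2
        exit 1
    fi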
00:16:27.658 14:18:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:27.917 [2024-11-18 14:18:19.960198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:27.917 [2024-11-18 14:18:19.960295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:27.917 [2024-11-18 14:18:19.960336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:16:27.917 [2024-11-18 14:18:19.960368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:27.917 [2024-11-18 14:18:19.962670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:27.917 [2024-11-18 14:18:19.962745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:27.917 [2024-11-18 14:18:19.962839] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:16:27.917 [2024-11-18 14:18:19.962919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:27.917 pt1
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:27.917 14:18:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:28.175 14:18:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:28.175 "name": "raid_bdev1",
00:16:28.175 "uuid": "95cbdc8d-8ed4-4f25-aca8-633d8f79f480",
00:16:28.175 "strip_size_kb": 64,
00:16:28.175 "state": "configuring",
00:16:28.175 "raid_level": "raid0",
00:16:28.175 "superblock": true,
00:16:28.175 "num_base_bdevs": 4,
00:16:28.175 "num_base_bdevs_discovered": 1,
00:16:28.175 "num_base_bdevs_operational": 4,
00:16:28.175 "base_bdevs_list": [
00:16:28.175 {
00:16:28.175 "name": "pt1",
00:16:28.175 "uuid": "c2f4b032-6262-5271-b3a1-06910a2d0020",
00:16:28.175 "is_configured": true,
00:16:28.175 "data_offset": 2048,
00:16:28.175 "data_size": 63488
00:16:28.175 },
00:16:28.175 {
00:16:28.175 "name": null,
00:16:28.175 "uuid": "a8bda9ee-73ae-5b3a-840d-b6c92d4e9390",
00:16:28.175 "is_configured": false,
00:16:28.175 "data_offset": 2048,
00:16:28.175 "data_size": 63488
00:16:28.175 },
00:16:28.175 {
00:16:28.175 "name": null,
00:16:28.175 "uuid": "c732dfbb-01c4-548b-8ee8-86301f0a08cd",
00:16:28.175 "is_configured": false,
00:16:28.175 "data_offset": 2048,
00:16:28.175 "data_size": 63488
00:16:28.175 },
00:16:28.175 {
00:16:28.175 "name": null,
00:16:28.175 "uuid": "c83c014d-927b-5771-83e1-edee04167552",
00:16:28.175 "is_configured": false,
00:16:28.175 "data_offset": 2048,
00:16:28.175 "data_size": 63488
00:16:28.175 }
00:16:28.175 ]
00:16:28.175 }'
00:16:28.175 14:18:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:28.175 14:18:20 -- common/autotest_common.sh@10 -- # set +x
00:16:28.804 14:18:20 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:16:28.804 14:18:20 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:29.073 [2024-11-18 14:18:20.980370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:29.073 [2024-11-18 14:18:20.980441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:29.073 [2024-11-18 14:18:20.980489] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:16:29.073 [2024-11-18 14:18:20.980514] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:29.073 [2024-11-18 14:18:20.980883] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:29.073 [2024-11-18 14:18:20.980942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:29.073 [2024-11-18 14:18:20.981021] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:29.073 [2024-11-18 14:18:20.981046] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:29.073 pt2
00:16:29.073 14:18:20 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:29.334 [2024-11-18 14:18:21.160408] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
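Steps @414-@417 exercise removal on an array with more than two legs: pt2 is briefly re-registered (its superblock is found immediately, so raid_bdev1 re-claims it) and then deleted, which fires _raid_bdev_remove_base_bdev and must leave the array in configuring with only pt1 discovered. A sketch of that assertion, reusing the jq select filter from this log:

    # Sketch: after deleting pt2, raid_bdev1 must drop back to "configuring".
    sock=/var/tmp/spdk-raid.sock
    scripts/rpc.py -s "$sock" bdev_passthru_delete pt2
    state=$(scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [ "$state" = configuring ] || { echo "unexpected state: $state" >&2; exit 1; }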
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:29.334 14:18:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:29.593 14:18:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:29.593 "name": "raid_bdev1",
00:16:29.593 "uuid": "95cbdc8d-8ed4-4f25-aca8-633d8f79f480",
00:16:29.593 "strip_size_kb": 64,
00:16:29.593 "state": "configuring",
00:16:29.593 "raid_level": "raid0",
00:16:29.593 "superblock": true,
00:16:29.593 "num_base_bdevs": 4,
00:16:29.593 "num_base_bdevs_discovered": 1,
00:16:29.593 "num_base_bdevs_operational": 4,
00:16:29.593 "base_bdevs_list": [
00:16:29.593 {
00:16:29.593 "name": "pt1",
00:16:29.593 "uuid": "c2f4b032-6262-5271-b3a1-06910a2d0020",
00:16:29.593 "is_configured": true,
00:16:29.593 "data_offset": 2048,
00:16:29.593 "data_size": 63488
00:16:29.593 },
00:16:29.593 {
00:16:29.593 "name": null,
00:16:29.593 "uuid": "a8bda9ee-73ae-5b3a-840d-b6c92d4e9390",
00:16:29.593 "is_configured": false,
00:16:29.593 "data_offset": 2048,
00:16:29.593 "data_size": 63488
00:16:29.593 },
00:16:29.593 {
00:16:29.593 "name": null,
00:16:29.593 "uuid": "c732dfbb-01c4-548b-8ee8-86301f0a08cd",
00:16:29.593 "is_configured": false,
00:16:29.593 "data_offset": 2048,
00:16:29.593 "data_size": 63488
00:16:29.593 },
00:16:29.593 {
00:16:29.593 "name": null,
00:16:29.593 "uuid": "c83c014d-927b-5771-83e1-edee04167552",
00:16:29.593 "is_configured": false,
00:16:29.593 "data_offset": 2048,
00:16:29.593 "data_size": 63488
00:16:29.593 }
00:16:29.593 ]
00:16:29.593 }'
00:16:29.593 14:18:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:29.593 14:18:21 -- common/autotest_common.sh@10 -- # set +x
00:16:30.162 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:16:30.162 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:30.162 14:18:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:30.162 [2024-11-18 14:18:22.176561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:30.162 [2024-11-18 14:18:22.176622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:30.162 [2024-11-18 14:18:22.176662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:16:30.162 [2024-11-18 14:18:22.176688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:30.162 [2024-11-18 14:18:22.177706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:30.162 [2024-11-18 14:18:22.177771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:30.162 [2024-11-18 14:18:22.177844] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:16:30.162 [2024-11-18 14:18:22.177870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:30.162 pt2
00:16:30.162 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:30.162 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:30.162 14:18:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:30.422 [2024-11-18 14:18:22.440637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:16:30.422 [2024-11-18 14:18:22.440712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:30.422 [2024-11-18 14:18:22.440745] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:16:30.422 [2024-11-18 14:18:22.440773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:30.422 [2024-11-18 14:18:22.441131] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:30.422 [2024-11-18 14:18:22.441193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:30.422 [2024-11-18 14:18:22.441259] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:16:30.422 [2024-11-18 14:18:22.441282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:30.422 pt3
00:16:30.422 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:30.422 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:30.422 14:18:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:30.682 [2024-11-18 14:18:22.684660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:16:30.682 [2024-11-18 14:18:22.684727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:30.682 [2024-11-18 14:18:22.684761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:16:30.682 [2024-11-18 14:18:22.684792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:30.682 [2024-11-18 14:18:22.685135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:30.682 [2024-11-18 14:18:22.685199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:30.682 [2024-11-18 14:18:22.685265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:16:30.682 [2024-11-18 14:18:22.685289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:16:30.682 [2024-11-18 14:18:22.685414] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:16:30.682 [2024-11-18 14:18:22.685439] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:16:30.682 [2024-11-18 14:18:22.685521] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940
00:16:30.682 [2024-11-18 14:18:22.685848] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:16:30.682 [2024-11-18 14:18:22.685873] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:16:30.682 [2024-11-18 14:18:22.685970] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:30.682 pt4
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:30.682 14:18:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:30.942 14:18:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:30.942 "name": "raid_bdev1",
00:16:30.942 "uuid": "95cbdc8d-8ed4-4f25-aca8-633d8f79f480",
00:16:30.942 "strip_size_kb": 64,
00:16:30.942 "state": "online",
00:16:30.942 "raid_level": "raid0",
00:16:30.942 "superblock": true,
00:16:30.942 "num_base_bdevs": 4,
00:16:30.942 "num_base_bdevs_discovered": 4,
00:16:30.942 "num_base_bdevs_operational": 4,
00:16:30.942 "base_bdevs_list": [
00:16:30.942 {
00:16:30.942 "name": "pt1",
00:16:30.942 "uuid": "c2f4b032-6262-5271-b3a1-06910a2d0020",
00:16:30.942 "is_configured": true,
00:16:30.942 "data_offset": 2048,
00:16:30.942 "data_size": 63488
00:16:30.942 },
00:16:30.942 {
00:16:30.942 "name": "pt2",
00:16:30.942 "uuid": "a8bda9ee-73ae-5b3a-840d-b6c92d4e9390",
00:16:30.942 "is_configured": true,
00:16:30.942 "data_offset": 2048,
00:16:30.942 "data_size": 63488
00:16:30.942 },
00:16:30.942 {
00:16:30.942 "name": "pt3",
00:16:30.942 "uuid": "c732dfbb-01c4-548b-8ee8-86301f0a08cd",
00:16:30.942 "is_configured": true,
00:16:30.942 "data_offset": 2048,
00:16:30.942 "data_size": 63488
00:16:30.942 },
00:16:30.942 {
00:16:30.942 "name": "pt4",
00:16:30.942 "uuid": "c83c014d-927b-5771-83e1-edee04167552",
00:16:30.942 "is_configured": true,
00:16:30.942 "data_offset": 2048,
00:16:30.942 "data_size": 63488
00:16:30.942 }
00:16:30.942 ]
00:16:30.942 }'
00:16:30.942 14:18:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:30.942 14:18:22 -- common/autotest_common.sh@10 -- # set +x
00:16:31.510 14:18:23 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:31.510 14:18:23 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:16:31.770 [2024-11-18 14:18:23.805115] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:31.770 14:18:23 -- bdev/bdev_raid.sh@430 -- # '[' 95cbdc8d-8ed4-4f25-aca8-633d8f79f480 '!=' 95cbdc8d-8ed4-4f25-aca8-633d8f79f480 ']'
00:16:31.770 14:18:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0
00:16:31.770 14:18:23 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:31.770 14:18:23 -- bdev/bdev_raid.sh@197 -- # return 1
00:16:31.770 14:18:23 -- bdev/bdev_raid.sh@511 -- # killprocess 129405
00:16:31.770 14:18:23 -- common/autotest_common.sh@936 -- # '[' -z 129405 ']'
00:16:31.770 14:18:23 -- common/autotest_common.sh@940 -- # kill -0 129405
00:16:31.770 14:18:23 -- common/autotest_common.sh@941 -- # uname
00:16:31.770 14:18:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:31.770 14:18:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129405
00:16:32.029 killing process with pid 129405
00:16:32.029 14:18:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:32.029 14:18:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:32.029 14:18:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129405'
00:16:32.029 14:18:23 -- common/autotest_common.sh@955 -- # kill 129405
00:16:32.029 14:18:23 -- common/autotest_common.sh@960 -- # wait 129405
00:16:32.029 [2024-11-18 14:18:23.849159] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:16:32.029 [2024-11-18 14:18:23.849237] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:32.029 [2024-11-18 14:18:23.849334] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:32.029 [2024-11-18 14:18:23.849358] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:16:32.029 [2024-11-18 14:18:23.900385] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:16:32.289 ************************************
00:16:32.289 END TEST raid_superblock_test
00:16:32.289 ************************************
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@513 -- # return 0
00:16:32.289
00:16:32.289 real 0m10.240s
00:16:32.289 user 0m18.578s
00:16:32.289 sys 0m1.299s
00:16:32.289 14:18:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:32.289 14:18:24 -- common/autotest_common.sh@10 -- # set +x
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false
00:16:32.289 14:18:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:16:32.289 14:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:32.289 14:18:24 -- common/autotest_common.sh@10 -- # set +x
00:16:32.289 ************************************
00:16:32.289 START TEST raid_state_function_test
00:16:32.289 ************************************
00:16:32.289 14:18:24 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=129723
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129723'
00:16:32.289 Process raid pid: 129723
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:16:32.289 14:18:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129723 /var/tmp/spdk-raid.sock
00:16:32.289 14:18:24 -- common/autotest_common.sh@829 -- # '[' -z 129723 ']'
00:16:32.289 14:18:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:32.289 14:18:24 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:32.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:32.289 14:18:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:16:32.289 14:18:24 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:32.289 14:18:24 -- common/autotest_common.sh@10 -- # set +x
00:16:32.289 [2024-11-18 14:18:24.308431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:16:32.289 [2024-11-18 14:18:24.308669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:32.549 [2024-11-18 14:18:24.453743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:32.549 [2024-11-18 14:18:24.522020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:32.549 [2024-11-18 14:18:24.592227] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:33.487 14:18:25 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:33.487 14:18:25 -- common/autotest_common.sh@862 -- # return 0
00:16:33.487 14:18:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:33.487 [2024-11-18 14:18:25.498671] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:33.487 [2024-11-18 14:18:25.498764] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:33.487 [2024-11-18 14:18:25.498779] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:33.487 [2024-11-18 14:18:25.498801] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:33.487 [2024-11-18 14:18:25.498810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:33.487 [2024-11-18 14:18:25.498854] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:33.487 [2024-11-18 14:18:25.498865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:33.487 [2024-11-18 14:18:25.498895] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
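Unlike raid_superblock_test, this test registers the array before any of its base bdevs exist: bdev_raid_create records each missing name ("doesn't exist now") and succeeds anyway, leaving Existed_Raid in configuring with nothing discovered. A sketch of that create-first pattern (concat level, no superblock, commands exactly as in this log):

    # Sketch: create a concat raid whose base bdevs don't exist yet; the
    # array waits in "configuring" until all four names appear.
    sock=/var/tmp/spdk-raid.sock
    scripts/rpc.py -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'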
"name": "BaseBdev1", 00:16:33.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.746 "is_configured": false, 00:16:33.746 "data_offset": 0, 00:16:33.746 "data_size": 0 00:16:33.746 }, 00:16:33.746 { 00:16:33.746 "name": "BaseBdev2", 00:16:33.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.746 "is_configured": false, 00:16:33.746 "data_offset": 0, 00:16:33.746 "data_size": 0 00:16:33.746 }, 00:16:33.746 { 00:16:33.746 "name": "BaseBdev3", 00:16:33.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.746 "is_configured": false, 00:16:33.746 "data_offset": 0, 00:16:33.746 "data_size": 0 00:16:33.746 }, 00:16:33.746 { 00:16:33.746 "name": "BaseBdev4", 00:16:33.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.746 "is_configured": false, 00:16:33.746 "data_offset": 0, 00:16:33.746 "data_size": 0 00:16:33.746 } 00:16:33.746 ] 00:16:33.746 }' 00:16:33.746 14:18:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.746 14:18:25 -- common/autotest_common.sh@10 -- # set +x 00:16:34.313 14:18:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.572 [2024-11-18 14:18:26.566699] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.572 [2024-11-18 14:18:26.566736] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:34.572 14:18:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:34.832 [2024-11-18 14:18:26.810785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.832 [2024-11-18 14:18:26.810840] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.832 [2024-11-18 14:18:26.810852] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.832 [2024-11-18 14:18:26.810880] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.832 [2024-11-18 14:18:26.810890] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.832 [2024-11-18 14:18:26.810908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.832 [2024-11-18 14:18:26.810916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:34.832 [2024-11-18 14:18:26.810943] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:34.832 14:18:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:35.091 [2024-11-18 14:18:27.012840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.091 BaseBdev1 00:16:35.091 14:18:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:35.091 14:18:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:35.091 14:18:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.091 14:18:27 -- common/autotest_common.sh@899 -- # local i 00:16:35.091 14:18:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.091 14:18:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.091 14:18:27 -- common/autotest_common.sh@902 -- # 
00:16:35.350 14:18:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:35.609 [
00:16:35.609 {
00:16:35.609 "name": "BaseBdev1",
00:16:35.609 "aliases": [
00:16:35.609 "57376fc0-6420-45ef-b1e5-6b96fd85ce89"
00:16:35.609 ],
00:16:35.609 "product_name": "Malloc disk",
00:16:35.609 "block_size": 512,
00:16:35.609 "num_blocks": 65536,
00:16:35.609 "uuid": "57376fc0-6420-45ef-b1e5-6b96fd85ce89",
00:16:35.609 "assigned_rate_limits": {
00:16:35.609 "rw_ios_per_sec": 0,
00:16:35.609 "rw_mbytes_per_sec": 0,
00:16:35.609 "r_mbytes_per_sec": 0,
00:16:35.609 "w_mbytes_per_sec": 0
00:16:35.609 },
00:16:35.609 "claimed": true,
00:16:35.609 "claim_type": "exclusive_write",
00:16:35.609 "zoned": false,
00:16:35.609 "supported_io_types": {
00:16:35.609 "read": true,
00:16:35.609 "write": true,
00:16:35.609 "unmap": true,
00:16:35.609 "write_zeroes": true,
00:16:35.609 "flush": true,
00:16:35.609 "reset": true,
00:16:35.609 "compare": false,
00:16:35.609 "compare_and_write": false,
00:16:35.609 "abort": true,
00:16:35.609 "nvme_admin": false,
00:16:35.609 "nvme_io": false
00:16:35.609 },
00:16:35.609 "memory_domains": [
00:16:35.609 {
00:16:35.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:35.609 "dma_device_type": 2
00:16:35.609 }
00:16:35.609 ],
00:16:35.609 "driver_specific": {}
00:16:35.609 }
00:16:35.609 ]
00:16:35.609 14:18:27 -- common/autotest_common.sh@905 -- # return 0
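The waitforbdev helper seen here is just bdev_wait_for_examine followed by bdev_get_bdevs with -t 2000, which asks the target to wait up to 2000 ms for the named bdev to appear before dumping it. A minimal sketch of the same wait (the function name is illustrative; the two RPC calls are the ones in this log):

    # Sketch: block until a freshly created bdev has been examined and
    # is visible, mirroring waitforbdev above.
    sock=/var/tmp/spdk-raid.sock
    wait_for_bdev() {
        scripts/rpc.py -s "$sock" bdev_wait_for_examine
        scripts/rpc.py -s "$sock" bdev_get_bdevs -b "$1" -t 2000 > /dev/null
    }
    wait_for_bdev BaseBdev1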
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:35.609 "name": "Existed_Raid",
00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:35.609 "strip_size_kb": 64,
00:16:35.609 "state": "configuring",
00:16:35.609 "raid_level": "concat",
00:16:35.609 "superblock": false,
00:16:35.609 "num_base_bdevs": 4,
00:16:35.609 "num_base_bdevs_discovered": 1,
00:16:35.609 "num_base_bdevs_operational": 4,
00:16:35.609 "base_bdevs_list": [
00:16:35.609 {
00:16:35.609 "name": "BaseBdev1",
00:16:35.609 "uuid": "57376fc0-6420-45ef-b1e5-6b96fd85ce89",
00:16:35.609 "is_configured": true,
00:16:35.609 "data_offset": 0,
00:16:35.609 "data_size": 65536
00:16:35.609 },
00:16:35.609 {
00:16:35.609 "name": "BaseBdev2",
00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:35.609 "is_configured": false,
00:16:35.609 "data_offset": 0,
00:16:35.609 "data_size": 0
00:16:35.609 },
00:16:35.609 {
00:16:35.609 "name": "BaseBdev3",
00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:35.609 "is_configured": false,
00:16:35.609 "data_offset": 0,
00:16:35.609 "data_size": 0
00:16:35.609 },
00:16:35.609 {
00:16:35.609 "name": "BaseBdev4",
00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:35.609 "is_configured": false,
00:16:35.609 "data_offset": 0,
00:16:35.609 "data_size": 0
00:16:35.609 }
00:16:35.609 ]
00:16:35.609 }'
00:16:35.609 14:18:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:35.609 14:18:27 -- common/autotest_common.sh@10 -- # set +x
00:16:36.544 14:18:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:36.544 [2024-11-18 14:18:28.445065] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:36.545 [2024-11-18 14:18:28.445117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring
00:16:36.545 14:18:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:16:36.545 14:18:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:16:36.803 [2024-11-18 14:18:28.697191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:36.803 [2024-11-18 14:18:28.699281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:36.803 [2024-11-18 14:18:28.699358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:36.803 [2024-11-18 14:18:28.699371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:36.803 [2024-11-18 14:18:28.699400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:36.803 [2024-11-18 14:18:28.699410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:16:36.803 [2024-11-18 14:18:28.699430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:36.803 14:18:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:37.062 14:18:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:37.062 "name": "Existed_Raid",
00:16:37.062 "uuid": "00000000-0000-0000-0000-000000000000",
"uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.062 "strip_size_kb": 64, 00:16:37.062 "state": "configuring", 00:16:37.062 "raid_level": "concat", 00:16:37.062 "superblock": false, 00:16:37.062 "num_base_bdevs": 4, 00:16:37.062 "num_base_bdevs_discovered": 1, 00:16:37.062 "num_base_bdevs_operational": 4, 00:16:37.062 "base_bdevs_list": [ 00:16:37.062 { 00:16:37.062 "name": "BaseBdev1", 00:16:37.062 "uuid": "57376fc0-6420-45ef-b1e5-6b96fd85ce89", 00:16:37.062 "is_configured": true, 00:16:37.062 "data_offset": 0, 00:16:37.062 "data_size": 65536 00:16:37.062 }, 00:16:37.062 { 00:16:37.062 "name": "BaseBdev2", 00:16:37.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.062 "is_configured": false, 00:16:37.062 "data_offset": 0, 00:16:37.062 "data_size": 0 00:16:37.062 }, 00:16:37.062 { 00:16:37.062 "name": "BaseBdev3", 00:16:37.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.062 "is_configured": false, 00:16:37.062 "data_offset": 0, 00:16:37.062 "data_size": 0 00:16:37.062 }, 00:16:37.062 { 00:16:37.062 "name": "BaseBdev4", 00:16:37.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.062 "is_configured": false, 00:16:37.062 "data_offset": 0, 00:16:37.062 "data_size": 0 00:16:37.062 } 00:16:37.062 ] 00:16:37.062 }' 00:16:37.062 14:18:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.062 14:18:28 -- common/autotest_common.sh@10 -- # set +x 00:16:37.663 14:18:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.921 [2024-11-18 14:18:29.843842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.921 BaseBdev2 00:16:37.921 14:18:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:37.921 14:18:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:37.921 14:18:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:37.921 14:18:29 -- common/autotest_common.sh@899 -- # local i 00:16:37.921 14:18:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:37.921 14:18:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:37.921 14:18:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.179 14:18:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.438 [ 00:16:38.438 { 00:16:38.438 "name": "BaseBdev2", 00:16:38.438 "aliases": [ 00:16:38.438 "8b2bb84e-ce3e-437c-9a03-61602d542520" 00:16:38.438 ], 00:16:38.438 "product_name": "Malloc disk", 00:16:38.438 "block_size": 512, 00:16:38.438 "num_blocks": 65536, 00:16:38.438 "uuid": "8b2bb84e-ce3e-437c-9a03-61602d542520", 00:16:38.438 "assigned_rate_limits": { 00:16:38.438 "rw_ios_per_sec": 0, 00:16:38.438 "rw_mbytes_per_sec": 0, 00:16:38.438 "r_mbytes_per_sec": 0, 00:16:38.438 "w_mbytes_per_sec": 0 00:16:38.438 }, 00:16:38.438 "claimed": true, 00:16:38.438 "claim_type": "exclusive_write", 00:16:38.438 "zoned": false, 00:16:38.438 "supported_io_types": { 00:16:38.438 "read": true, 00:16:38.438 "write": true, 00:16:38.438 "unmap": true, 00:16:38.438 "write_zeroes": true, 00:16:38.438 "flush": true, 00:16:38.438 "reset": true, 00:16:38.438 "compare": false, 00:16:38.438 "compare_and_write": false, 00:16:38.438 "abort": true, 00:16:38.438 "nvme_admin": false, 00:16:38.438 "nvme_io": false 00:16:38.438 }, 00:16:38.438 "memory_domains": [ 
00:16:38.438 {
00:16:38.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:38.438 "dma_device_type": 2
00:16:38.438 }
00:16:38.438 ],
00:16:38.438 "driver_specific": {}
00:16:38.438 }
00:16:38.438 ]
00:16:38.438 14:18:30 -- common/autotest_common.sh@905 -- # return 0
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:38.438 14:18:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:38.697 14:18:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:38.697 "name": "Existed_Raid",
00:16:38.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:38.697 "strip_size_kb": 64,
00:16:38.697 "state": "configuring",
00:16:38.697 "raid_level": "concat",
00:16:38.697 "superblock": false,
00:16:38.697 "num_base_bdevs": 4,
00:16:38.697 "num_base_bdevs_discovered": 2,
00:16:38.697 "num_base_bdevs_operational": 4,
00:16:38.697 "base_bdevs_list": [
00:16:38.697 {
00:16:38.697 "name": "BaseBdev1",
00:16:38.697 "uuid": "57376fc0-6420-45ef-b1e5-6b96fd85ce89",
00:16:38.697 "is_configured": true,
00:16:38.697 "data_offset": 0,
00:16:38.697 "data_size": 65536
00:16:38.697 },
00:16:38.697 {
00:16:38.697 "name": "BaseBdev2",
00:16:38.697 "uuid": "8b2bb84e-ce3e-437c-9a03-61602d542520",
00:16:38.697 "is_configured": true,
00:16:38.697 "data_offset": 0,
00:16:38.697 "data_size": 65536
00:16:38.697 },
00:16:38.697 {
00:16:38.697 "name": "BaseBdev3",
00:16:38.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:38.697 "is_configured": false,
00:16:38.697 "data_offset": 0,
00:16:38.697 "data_size": 0
00:16:38.697 },
00:16:38.697 {
00:16:38.697 "name": "BaseBdev4",
00:16:38.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:38.697 "is_configured": false,
00:16:38.697 "data_offset": 0,
00:16:38.697 "data_size": 0
00:16:38.697 }
00:16:38.697 ]
00:16:38.697 }'
00:16:38.697 14:18:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:38.697 14:18:30 -- common/autotest_common.sh@10 -- # set +x
00:16:39.523 14:18:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:39.523 [2024-11-18 14:18:31.388652] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:39.523 BaseBdev3
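Each bdev_malloc_create is claimed by the waiting array the moment it registers ("bdev BaseBdevN is claimed"), and the verify calls above show num_base_bdevs_discovered climbing 1 -> 2 -> 3; Existed_Raid only flips to online when the fourth leg arrives. A sketch for watching that counter (jq string interpolation over the same get_bdevs output used throughout this log):

    # Sketch: report discovery progress; "online 4/4" is the end state.
    sock=/var/tmp/spdk-raid.sock
    scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'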
14:18:31 -- common/autotest_common.sh@899 -- # local i 00:16:39.523 14:18:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.523 14:18:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.523 14:18:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.783 14:18:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:39.783 [ 00:16:39.783 { 00:16:39.783 "name": "BaseBdev3", 00:16:39.783 "aliases": [ 00:16:39.783 "a09b8903-4ead-423d-a4e2-6a732a8382ea" 00:16:39.783 ], 00:16:39.783 "product_name": "Malloc disk", 00:16:39.783 "block_size": 512, 00:16:39.783 "num_blocks": 65536, 00:16:39.783 "uuid": "a09b8903-4ead-423d-a4e2-6a732a8382ea", 00:16:39.783 "assigned_rate_limits": { 00:16:39.783 "rw_ios_per_sec": 0, 00:16:39.783 "rw_mbytes_per_sec": 0, 00:16:39.783 "r_mbytes_per_sec": 0, 00:16:39.783 "w_mbytes_per_sec": 0 00:16:39.783 }, 00:16:39.783 "claimed": true, 00:16:39.783 "claim_type": "exclusive_write", 00:16:39.783 "zoned": false, 00:16:39.783 "supported_io_types": { 00:16:39.783 "read": true, 00:16:39.783 "write": true, 00:16:39.783 "unmap": true, 00:16:39.783 "write_zeroes": true, 00:16:39.783 "flush": true, 00:16:39.783 "reset": true, 00:16:39.783 "compare": false, 00:16:39.783 "compare_and_write": false, 00:16:39.783 "abort": true, 00:16:39.783 "nvme_admin": false, 00:16:39.783 "nvme_io": false 00:16:39.783 }, 00:16:39.783 "memory_domains": [ 00:16:39.783 { 00:16:39.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.783 "dma_device_type": 2 00:16:39.783 } 00:16:39.783 ], 00:16:39.783 "driver_specific": {} 00:16:39.783 } 00:16:39.783 ] 00:16:40.042 14:18:31 -- common/autotest_common.sh@905 -- # return 0 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.042 14:18:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.042 14:18:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.042 "name": "Existed_Raid", 00:16:40.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.042 "strip_size_kb": 64, 00:16:40.042 "state": "configuring", 00:16:40.042 "raid_level": "concat", 00:16:40.042 "superblock": false, 00:16:40.042 "num_base_bdevs": 4, 00:16:40.042 "num_base_bdevs_discovered": 3, 00:16:40.042 "num_base_bdevs_operational": 4, 00:16:40.042 "base_bdevs_list": [ 00:16:40.042 { 00:16:40.042 "name": 
"BaseBdev1", 00:16:40.042 "uuid": "57376fc0-6420-45ef-b1e5-6b96fd85ce89", 00:16:40.042 "is_configured": true, 00:16:40.042 "data_offset": 0, 00:16:40.042 "data_size": 65536 00:16:40.042 }, 00:16:40.042 { 00:16:40.042 "name": "BaseBdev2", 00:16:40.042 "uuid": "8b2bb84e-ce3e-437c-9a03-61602d542520", 00:16:40.042 "is_configured": true, 00:16:40.042 "data_offset": 0, 00:16:40.042 "data_size": 65536 00:16:40.042 }, 00:16:40.042 { 00:16:40.042 "name": "BaseBdev3", 00:16:40.042 "uuid": "a09b8903-4ead-423d-a4e2-6a732a8382ea", 00:16:40.042 "is_configured": true, 00:16:40.042 "data_offset": 0, 00:16:40.042 "data_size": 65536 00:16:40.042 }, 00:16:40.042 { 00:16:40.042 "name": "BaseBdev4", 00:16:40.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.042 "is_configured": false, 00:16:40.042 "data_offset": 0, 00:16:40.042 "data_size": 0 00:16:40.042 } 00:16:40.042 ] 00:16:40.042 }' 00:16:40.042 14:18:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.042 14:18:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.979 14:18:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:40.979 [2024-11-18 14:18:32.933335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.979 [2024-11-18 14:18:32.933383] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:40.979 [2024-11-18 14:18:32.933392] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:40.979 [2024-11-18 14:18:32.933524] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:40.979 [2024-11-18 14:18:32.933934] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:40.979 [2024-11-18 14:18:32.933958] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:40.979 [2024-11-18 14:18:32.934235] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.979 BaseBdev4 00:16:40.979 14:18:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:40.979 14:18:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:40.979 14:18:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:40.979 14:18:32 -- common/autotest_common.sh@899 -- # local i 00:16:40.979 14:18:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:40.979 14:18:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:40.979 14:18:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.238 14:18:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:41.499 [ 00:16:41.499 { 00:16:41.499 "name": "BaseBdev4", 00:16:41.499 "aliases": [ 00:16:41.499 "fff0071b-0fa8-4866-8790-c6634ef4078b" 00:16:41.499 ], 00:16:41.499 "product_name": "Malloc disk", 00:16:41.499 "block_size": 512, 00:16:41.499 "num_blocks": 65536, 00:16:41.499 "uuid": "fff0071b-0fa8-4866-8790-c6634ef4078b", 00:16:41.499 "assigned_rate_limits": { 00:16:41.499 "rw_ios_per_sec": 0, 00:16:41.499 "rw_mbytes_per_sec": 0, 00:16:41.499 "r_mbytes_per_sec": 0, 00:16:41.499 "w_mbytes_per_sec": 0 00:16:41.499 }, 00:16:41.499 "claimed": true, 00:16:41.499 "claim_type": "exclusive_write", 00:16:41.499 "zoned": false, 00:16:41.499 
"supported_io_types": { 00:16:41.499 "read": true, 00:16:41.499 "write": true, 00:16:41.499 "unmap": true, 00:16:41.499 "write_zeroes": true, 00:16:41.499 "flush": true, 00:16:41.499 "reset": true, 00:16:41.499 "compare": false, 00:16:41.499 "compare_and_write": false, 00:16:41.499 "abort": true, 00:16:41.499 "nvme_admin": false, 00:16:41.499 "nvme_io": false 00:16:41.499 }, 00:16:41.499 "memory_domains": [ 00:16:41.499 { 00:16:41.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.499 "dma_device_type": 2 00:16:41.499 } 00:16:41.499 ], 00:16:41.499 "driver_specific": {} 00:16:41.499 } 00:16:41.499 ] 00:16:41.499 14:18:33 -- common/autotest_common.sh@905 -- # return 0 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.499 14:18:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.757 14:18:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.757 "name": "Existed_Raid", 00:16:41.757 "uuid": "835c738f-c495-4954-892a-38cd41513af6", 00:16:41.757 "strip_size_kb": 64, 00:16:41.757 "state": "online", 00:16:41.757 "raid_level": "concat", 00:16:41.757 "superblock": false, 00:16:41.757 "num_base_bdevs": 4, 00:16:41.757 "num_base_bdevs_discovered": 4, 00:16:41.757 "num_base_bdevs_operational": 4, 00:16:41.757 "base_bdevs_list": [ 00:16:41.757 { 00:16:41.757 "name": "BaseBdev1", 00:16:41.757 "uuid": "57376fc0-6420-45ef-b1e5-6b96fd85ce89", 00:16:41.757 "is_configured": true, 00:16:41.757 "data_offset": 0, 00:16:41.757 "data_size": 65536 00:16:41.757 }, 00:16:41.757 { 00:16:41.757 "name": "BaseBdev2", 00:16:41.757 "uuid": "8b2bb84e-ce3e-437c-9a03-61602d542520", 00:16:41.757 "is_configured": true, 00:16:41.757 "data_offset": 0, 00:16:41.757 "data_size": 65536 00:16:41.757 }, 00:16:41.757 { 00:16:41.757 "name": "BaseBdev3", 00:16:41.757 "uuid": "a09b8903-4ead-423d-a4e2-6a732a8382ea", 00:16:41.757 "is_configured": true, 00:16:41.757 "data_offset": 0, 00:16:41.757 "data_size": 65536 00:16:41.757 }, 00:16:41.757 { 00:16:41.757 "name": "BaseBdev4", 00:16:41.757 "uuid": "fff0071b-0fa8-4866-8790-c6634ef4078b", 00:16:41.757 "is_configured": true, 00:16:41.757 "data_offset": 0, 00:16:41.757 "data_size": 65536 00:16:41.757 } 00:16:41.757 ] 00:16:41.757 }' 00:16:41.757 14:18:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.757 14:18:33 -- common/autotest_common.sh@10 -- # set +x 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:16:42.324 [2024-11-18 14:18:34.369869] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.324 [2024-11-18 14:18:34.369898] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.324 [2024-11-18 14:18:34.369998] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.324 14:18:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.583 14:18:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.583 14:18:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.583 14:18:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.583 "name": "Existed_Raid", 00:16:42.583 "uuid": "835c738f-c495-4954-892a-38cd41513af6", 00:16:42.583 "strip_size_kb": 64, 00:16:42.583 "state": "offline", 00:16:42.583 "raid_level": "concat", 00:16:42.583 "superblock": false, 00:16:42.583 "num_base_bdevs": 4, 00:16:42.583 "num_base_bdevs_discovered": 3, 00:16:42.583 "num_base_bdevs_operational": 3, 00:16:42.583 "base_bdevs_list": [ 00:16:42.583 { 00:16:42.583 "name": null, 00:16:42.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.583 "is_configured": false, 00:16:42.583 "data_offset": 0, 00:16:42.583 "data_size": 65536 00:16:42.583 }, 00:16:42.583 { 00:16:42.583 "name": "BaseBdev2", 00:16:42.583 "uuid": "8b2bb84e-ce3e-437c-9a03-61602d542520", 00:16:42.583 "is_configured": true, 00:16:42.583 "data_offset": 0, 00:16:42.583 "data_size": 65536 00:16:42.583 }, 00:16:42.583 { 00:16:42.583 "name": "BaseBdev3", 00:16:42.583 "uuid": "a09b8903-4ead-423d-a4e2-6a732a8382ea", 00:16:42.583 "is_configured": true, 00:16:42.583 "data_offset": 0, 00:16:42.583 "data_size": 65536 00:16:42.583 }, 00:16:42.583 { 00:16:42.583 "name": "BaseBdev4", 00:16:42.583 "uuid": "fff0071b-0fa8-4866-8790-c6634ef4078b", 00:16:42.583 "is_configured": true, 00:16:42.583 "data_offset": 0, 00:16:42.583 "data_size": 65536 00:16:42.583 } 00:16:42.583 ] 00:16:42.583 }' 00:16:42.583 14:18:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.583 14:18:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.519 14:18:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:43.778 [2024-11-18 14:18:35.724661] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:43.778 14:18:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:43.778 14:18:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:43.778 14:18:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.778 14:18:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:44.037 14:18:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:44.037 14:18:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.037 14:18:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:44.296 [2024-11-18 14:18:36.234394] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:44.296 14:18:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:44.296 14:18:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.296 14:18:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.296 14:18:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:44.555 14:18:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:44.555 14:18:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:44.555 14:18:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:44.813 [2024-11-18 14:18:36.764318] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:44.814 [2024-11-18 14:18:36.764369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:44.814 14:18:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:44.814 14:18:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:44.814 14:18:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.814 14:18:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:45.073 14:18:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:45.073 14:18:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:45.073 14:18:36 -- bdev/bdev_raid.sh@287 -- # killprocess 129723 00:16:45.073 14:18:36 -- common/autotest_common.sh@936 -- # '[' -z 129723 ']' 00:16:45.073 14:18:36 -- common/autotest_common.sh@940 -- # kill -0 129723 00:16:45.073 14:18:36 -- common/autotest_common.sh@941 -- # uname 00:16:45.073 14:18:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.073 14:18:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129723 00:16:45.073 killing process with pid 129723 00:16:45.073 14:18:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.073 14:18:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.073 14:18:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129723' 00:16:45.073 14:18:36 -- common/autotest_common.sh@955 
-- # kill 129723 00:16:45.073 [2024-11-18 14:18:37.003982] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.073 14:18:36 -- common/autotest_common.sh@960 -- # wait 129723 00:16:45.073 [2024-11-18 14:18:37.004055] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.331 ************************************ 00:16:45.331 END TEST raid_state_function_test 00:16:45.331 ************************************ 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:45.331 00:16:45.331 real 0m12.969s 00:16:45.331 user 0m23.993s 00:16:45.331 sys 0m1.650s 00:16:45.331 14:18:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:45.331 14:18:37 -- common/autotest_common.sh@10 -- # set +x 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:45.331 14:18:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:45.331 14:18:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.331 14:18:37 -- common/autotest_common.sh@10 -- # set +x 00:16:45.331 ************************************ 00:16:45.331 START TEST raid_state_function_test_sb 00:16:45.331 ************************************ 00:16:45.331 14:18:37 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=130150 
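The offline expectation during the removal phase above comes from has_redundancy: concat falls through to the failure branch, so deleting any base bdev must drive the array from online to offline. The traced logic amounts to the following sketch (levels other than concat are an assumption here, not something this run exercises):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # assumed: mirrored levels tolerate losing a base bdev
            *)     return 1 ;;   # concat/raid0: no redundancy, expect offline
        esac
    }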
00:16:45.331 Process raid pid: 130150 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130150' 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:45.331 14:18:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130150 /var/tmp/spdk-raid.sock 00:16:45.331 14:18:37 -- common/autotest_common.sh@829 -- # '[' -z 130150 ']' 00:16:45.331 14:18:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.331 14:18:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:45.331 14:18:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.331 14:18:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.331 14:18:37 -- common/autotest_common.sh@10 -- # set +x 00:16:45.331 [2024-11-18 14:18:37.341473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:45.331 [2024-11-18 14:18:37.341702] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.590 [2024-11-18 14:18:37.488727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.590 [2024-11-18 14:18:37.552604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.590 [2024-11-18 14:18:37.604659] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.533 14:18:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.533 14:18:38 -- common/autotest_common.sh@862 -- # return 0 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:46.533 [2024-11-18 14:18:38.472835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.533 [2024-11-18 14:18:38.472908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.533 [2024-11-18 14:18:38.472922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.533 [2024-11-18 14:18:38.472939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.533 [2024-11-18 14:18:38.472946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:46.533 [2024-11-18 14:18:38.472984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:46.533 [2024-11-18 14:18:38.472993] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:46.533 [2024-11-18 14:18:38.473017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:46.533 14:18:38 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.533 14:18:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.790 14:18:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.790 "name": "Existed_Raid", 00:16:46.790 "uuid": "ff86d413-e650-407e-97ef-9f4f58f54c36", 00:16:46.790 "strip_size_kb": 64, 00:16:46.790 "state": "configuring", 00:16:46.790 "raid_level": "concat", 00:16:46.790 "superblock": true, 00:16:46.790 "num_base_bdevs": 4, 00:16:46.790 "num_base_bdevs_discovered": 0, 00:16:46.790 "num_base_bdevs_operational": 4, 00:16:46.790 "base_bdevs_list": [ 00:16:46.790 { 00:16:46.790 "name": "BaseBdev1", 00:16:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.790 "is_configured": false, 00:16:46.790 "data_offset": 0, 00:16:46.790 "data_size": 0 00:16:46.790 }, 00:16:46.790 { 00:16:46.790 "name": "BaseBdev2", 00:16:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.790 "is_configured": false, 00:16:46.790 "data_offset": 0, 00:16:46.790 "data_size": 0 00:16:46.790 }, 00:16:46.790 { 00:16:46.790 "name": "BaseBdev3", 00:16:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.790 "is_configured": false, 00:16:46.790 "data_offset": 0, 00:16:46.790 "data_size": 0 00:16:46.790 }, 00:16:46.790 { 00:16:46.790 "name": "BaseBdev4", 00:16:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.790 "is_configured": false, 00:16:46.790 "data_offset": 0, 00:16:46.790 "data_size": 0 00:16:46.790 } 00:16:46.790 ] 00:16:46.790 }' 00:16:46.790 14:18:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.790 14:18:38 -- common/autotest_common.sh@10 -- # set +x 00:16:47.356 14:18:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.614 [2024-11-18 14:18:39.508849] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.614 [2024-11-18 14:18:39.508892] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:47.614 14:18:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:47.874 [2024-11-18 14:18:39.748939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:47.874 [2024-11-18 14:18:39.748995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:47.874 [2024-11-18 14:18:39.749009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.874 [2024-11-18 14:18:39.749039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.874 [2024-11-18 14:18:39.749049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.874 [2024-11-18 14:18:39.749069] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.874 [2024-11-18 14:18:39.749078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:47.874 [2024-11-18 14:18:39.749106] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:47.874 14:18:39 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:47.874 [2024-11-18 14:18:39.946795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.874 BaseBdev1 00:16:48.132 14:18:39 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:48.132 14:18:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:48.132 14:18:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.132 14:18:39 -- common/autotest_common.sh@899 -- # local i 00:16:48.132 14:18:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.132 14:18:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.132 14:18:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.132 14:18:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.391 [ 00:16:48.391 { 00:16:48.391 "name": "BaseBdev1", 00:16:48.391 "aliases": [ 00:16:48.391 "1ca82c20-cdb8-4155-9931-438eac9636c9" 00:16:48.391 ], 00:16:48.391 "product_name": "Malloc disk", 00:16:48.391 "block_size": 512, 00:16:48.391 "num_blocks": 65536, 00:16:48.391 "uuid": "1ca82c20-cdb8-4155-9931-438eac9636c9", 00:16:48.391 "assigned_rate_limits": { 00:16:48.391 "rw_ios_per_sec": 0, 00:16:48.391 "rw_mbytes_per_sec": 0, 00:16:48.391 "r_mbytes_per_sec": 0, 00:16:48.391 "w_mbytes_per_sec": 0 00:16:48.391 }, 00:16:48.391 "claimed": true, 00:16:48.391 "claim_type": "exclusive_write", 00:16:48.391 "zoned": false, 00:16:48.391 "supported_io_types": { 00:16:48.391 "read": true, 00:16:48.391 "write": true, 00:16:48.391 "unmap": true, 00:16:48.391 "write_zeroes": true, 00:16:48.391 "flush": true, 00:16:48.391 "reset": true, 00:16:48.391 "compare": false, 00:16:48.391 "compare_and_write": false, 00:16:48.391 "abort": true, 00:16:48.391 "nvme_admin": false, 00:16:48.391 "nvme_io": false 00:16:48.391 }, 00:16:48.391 "memory_domains": [ 00:16:48.391 { 00:16:48.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.391 "dma_device_type": 2 00:16:48.391 } 00:16:48.391 ], 00:16:48.391 "driver_specific": {} 00:16:48.391 } 00:16:48.391 ] 00:16:48.391 14:18:40 -- common/autotest_common.sh@905 -- # return 0 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.391 14:18:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.650 14:18:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.650 "name": "Existed_Raid", 00:16:48.650 "uuid": "6c39c7ba-2576-48d2-874a-35f6c62f0d41", 00:16:48.650 "strip_size_kb": 64, 00:16:48.650 "state": "configuring", 00:16:48.650 "raid_level": "concat", 00:16:48.650 "superblock": true, 00:16:48.650 "num_base_bdevs": 4, 00:16:48.650 "num_base_bdevs_discovered": 1, 00:16:48.650 "num_base_bdevs_operational": 4, 00:16:48.650 "base_bdevs_list": [ 00:16:48.650 { 00:16:48.650 "name": "BaseBdev1", 00:16:48.650 "uuid": "1ca82c20-cdb8-4155-9931-438eac9636c9", 00:16:48.650 "is_configured": true, 00:16:48.650 "data_offset": 2048, 00:16:48.650 "data_size": 63488 00:16:48.650 }, 00:16:48.650 { 00:16:48.650 "name": "BaseBdev2", 00:16:48.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.650 "is_configured": false, 00:16:48.650 "data_offset": 0, 00:16:48.650 "data_size": 0 00:16:48.650 }, 00:16:48.650 { 00:16:48.650 "name": "BaseBdev3", 00:16:48.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.650 "is_configured": false, 00:16:48.650 "data_offset": 0, 00:16:48.650 "data_size": 0 00:16:48.650 }, 00:16:48.650 { 00:16:48.650 "name": "BaseBdev4", 00:16:48.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.650 "is_configured": false, 00:16:48.650 "data_offset": 0, 00:16:48.650 "data_size": 0 00:16:48.650 } 00:16:48.650 ] 00:16:48.650 }' 00:16:48.650 14:18:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.650 14:18:40 -- common/autotest_common.sh@10 -- # set +x 00:16:49.216 14:18:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.474 [2024-11-18 14:18:41.415033] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.474 [2024-11-18 14:18:41.415081] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:49.474 14:18:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:49.474 14:18:41 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:49.732 14:18:41 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.991 BaseBdev1 00:16:49.991 14:18:41 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:49.991 14:18:41 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:49.991 14:18:41 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:49.991 14:18:41 -- common/autotest_common.sh@899 -- # local i 00:16:49.991 14:18:41 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:49.991 14:18:41 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:49.991 14:18:41 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:50.249 14:18:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.508 [ 00:16:50.508 { 00:16:50.508 "name": "BaseBdev1", 00:16:50.508 "aliases": [ 00:16:50.508 "45f7d3aa-9151-422f-859c-2230a6ac0071" 00:16:50.508 ], 
00:16:50.508 "product_name": "Malloc disk", 00:16:50.508 "block_size": 512, 00:16:50.508 "num_blocks": 65536, 00:16:50.508 "uuid": "45f7d3aa-9151-422f-859c-2230a6ac0071", 00:16:50.508 "assigned_rate_limits": { 00:16:50.508 "rw_ios_per_sec": 0, 00:16:50.508 "rw_mbytes_per_sec": 0, 00:16:50.508 "r_mbytes_per_sec": 0, 00:16:50.508 "w_mbytes_per_sec": 0 00:16:50.508 }, 00:16:50.508 "claimed": false, 00:16:50.508 "zoned": false, 00:16:50.508 "supported_io_types": { 00:16:50.508 "read": true, 00:16:50.508 "write": true, 00:16:50.508 "unmap": true, 00:16:50.508 "write_zeroes": true, 00:16:50.508 "flush": true, 00:16:50.508 "reset": true, 00:16:50.508 "compare": false, 00:16:50.508 "compare_and_write": false, 00:16:50.508 "abort": true, 00:16:50.508 "nvme_admin": false, 00:16:50.508 "nvme_io": false 00:16:50.508 }, 00:16:50.508 "memory_domains": [ 00:16:50.508 { 00:16:50.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.508 "dma_device_type": 2 00:16:50.508 } 00:16:50.508 ], 00:16:50.508 "driver_specific": {} 00:16:50.508 } 00:16:50.508 ] 00:16:50.508 14:18:42 -- common/autotest_common.sh@905 -- # return 0 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:50.508 [2024-11-18 14:18:42.519090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.508 [2024-11-18 14:18:42.521076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.508 [2024-11-18 14:18:42.521158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.508 [2024-11-18 14:18:42.521173] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.508 [2024-11-18 14:18:42.521201] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.508 [2024-11-18 14:18:42.521212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:50.508 [2024-11-18 14:18:42.521232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:50.508 14:18:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.509 14:18:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.816 14:18:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.816 "name": "Existed_Raid", 
00:16:50.816 "uuid": "f80e6616-a3a6-494e-8ae6-b4537f1d4cc9", 00:16:50.816 "strip_size_kb": 64, 00:16:50.816 "state": "configuring", 00:16:50.816 "raid_level": "concat", 00:16:50.816 "superblock": true, 00:16:50.816 "num_base_bdevs": 4, 00:16:50.816 "num_base_bdevs_discovered": 1, 00:16:50.816 "num_base_bdevs_operational": 4, 00:16:50.816 "base_bdevs_list": [ 00:16:50.816 { 00:16:50.816 "name": "BaseBdev1", 00:16:50.816 "uuid": "45f7d3aa-9151-422f-859c-2230a6ac0071", 00:16:50.816 "is_configured": true, 00:16:50.816 "data_offset": 2048, 00:16:50.816 "data_size": 63488 00:16:50.816 }, 00:16:50.816 { 00:16:50.816 "name": "BaseBdev2", 00:16:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.816 "is_configured": false, 00:16:50.816 "data_offset": 0, 00:16:50.816 "data_size": 0 00:16:50.816 }, 00:16:50.816 { 00:16:50.816 "name": "BaseBdev3", 00:16:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.816 "is_configured": false, 00:16:50.816 "data_offset": 0, 00:16:50.816 "data_size": 0 00:16:50.816 }, 00:16:50.816 { 00:16:50.816 "name": "BaseBdev4", 00:16:50.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.816 "is_configured": false, 00:16:50.816 "data_offset": 0, 00:16:50.816 "data_size": 0 00:16:50.816 } 00:16:50.816 ] 00:16:50.816 }' 00:16:50.816 14:18:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.816 14:18:42 -- common/autotest_common.sh@10 -- # set +x 00:16:51.398 14:18:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:51.656 [2024-11-18 14:18:43.475873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.656 BaseBdev2 00:16:51.656 14:18:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:51.656 14:18:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:51.656 14:18:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.656 14:18:43 -- common/autotest_common.sh@899 -- # local i 00:16:51.656 14:18:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.656 14:18:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.656 14:18:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.915 14:18:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:51.915 [ 00:16:51.915 { 00:16:51.915 "name": "BaseBdev2", 00:16:51.915 "aliases": [ 00:16:51.915 "c03a2fcd-863d-491d-853f-c0ca44908d5b" 00:16:51.915 ], 00:16:51.915 "product_name": "Malloc disk", 00:16:51.915 "block_size": 512, 00:16:51.915 "num_blocks": 65536, 00:16:51.915 "uuid": "c03a2fcd-863d-491d-853f-c0ca44908d5b", 00:16:51.915 "assigned_rate_limits": { 00:16:51.915 "rw_ios_per_sec": 0, 00:16:51.915 "rw_mbytes_per_sec": 0, 00:16:51.915 "r_mbytes_per_sec": 0, 00:16:51.915 "w_mbytes_per_sec": 0 00:16:51.915 }, 00:16:51.915 "claimed": true, 00:16:51.915 "claim_type": "exclusive_write", 00:16:51.915 "zoned": false, 00:16:51.915 "supported_io_types": { 00:16:51.915 "read": true, 00:16:51.915 "write": true, 00:16:51.915 "unmap": true, 00:16:51.915 "write_zeroes": true, 00:16:51.915 "flush": true, 00:16:51.915 "reset": true, 00:16:51.915 "compare": false, 00:16:51.915 "compare_and_write": false, 00:16:51.915 "abort": true, 00:16:51.915 "nvme_admin": false, 00:16:51.915 "nvme_io": false 00:16:51.915 }, 00:16:51.915 
"memory_domains": [ 00:16:51.915 { 00:16:51.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.915 "dma_device_type": 2 00:16:51.915 } 00:16:51.915 ], 00:16:51.915 "driver_specific": {} 00:16:51.915 } 00:16:51.915 ] 00:16:51.915 14:18:43 -- common/autotest_common.sh@905 -- # return 0 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.915 14:18:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.174 14:18:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.174 "name": "Existed_Raid", 00:16:52.174 "uuid": "f80e6616-a3a6-494e-8ae6-b4537f1d4cc9", 00:16:52.174 "strip_size_kb": 64, 00:16:52.174 "state": "configuring", 00:16:52.174 "raid_level": "concat", 00:16:52.174 "superblock": true, 00:16:52.174 "num_base_bdevs": 4, 00:16:52.174 "num_base_bdevs_discovered": 2, 00:16:52.174 "num_base_bdevs_operational": 4, 00:16:52.174 "base_bdevs_list": [ 00:16:52.174 { 00:16:52.174 "name": "BaseBdev1", 00:16:52.174 "uuid": "45f7d3aa-9151-422f-859c-2230a6ac0071", 00:16:52.174 "is_configured": true, 00:16:52.174 "data_offset": 2048, 00:16:52.174 "data_size": 63488 00:16:52.174 }, 00:16:52.174 { 00:16:52.174 "name": "BaseBdev2", 00:16:52.174 "uuid": "c03a2fcd-863d-491d-853f-c0ca44908d5b", 00:16:52.174 "is_configured": true, 00:16:52.174 "data_offset": 2048, 00:16:52.174 "data_size": 63488 00:16:52.174 }, 00:16:52.174 { 00:16:52.174 "name": "BaseBdev3", 00:16:52.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.174 "is_configured": false, 00:16:52.174 "data_offset": 0, 00:16:52.174 "data_size": 0 00:16:52.174 }, 00:16:52.174 { 00:16:52.174 "name": "BaseBdev4", 00:16:52.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.174 "is_configured": false, 00:16:52.174 "data_offset": 0, 00:16:52.174 "data_size": 0 00:16:52.174 } 00:16:52.174 ] 00:16:52.174 }' 00:16:52.174 14:18:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.174 14:18:44 -- common/autotest_common.sh@10 -- # set +x 00:16:53.109 14:18:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:53.109 [2024-11-18 14:18:45.084075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.109 BaseBdev3 00:16:53.109 14:18:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:53.109 14:18:45 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:53.109 14:18:45 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:16:53.109 14:18:45 -- common/autotest_common.sh@899 -- # local i 00:16:53.109 14:18:45 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:53.109 14:18:45 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:53.109 14:18:45 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.369 14:18:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.628 [ 00:16:53.628 { 00:16:53.628 "name": "BaseBdev3", 00:16:53.628 "aliases": [ 00:16:53.628 "25ddd1df-f66b-48a9-adc6-6f973c58d692" 00:16:53.628 ], 00:16:53.628 "product_name": "Malloc disk", 00:16:53.628 "block_size": 512, 00:16:53.628 "num_blocks": 65536, 00:16:53.628 "uuid": "25ddd1df-f66b-48a9-adc6-6f973c58d692", 00:16:53.628 "assigned_rate_limits": { 00:16:53.628 "rw_ios_per_sec": 0, 00:16:53.628 "rw_mbytes_per_sec": 0, 00:16:53.628 "r_mbytes_per_sec": 0, 00:16:53.628 "w_mbytes_per_sec": 0 00:16:53.628 }, 00:16:53.628 "claimed": true, 00:16:53.628 "claim_type": "exclusive_write", 00:16:53.628 "zoned": false, 00:16:53.628 "supported_io_types": { 00:16:53.628 "read": true, 00:16:53.628 "write": true, 00:16:53.628 "unmap": true, 00:16:53.628 "write_zeroes": true, 00:16:53.628 "flush": true, 00:16:53.628 "reset": true, 00:16:53.628 "compare": false, 00:16:53.628 "compare_and_write": false, 00:16:53.628 "abort": true, 00:16:53.628 "nvme_admin": false, 00:16:53.628 "nvme_io": false 00:16:53.628 }, 00:16:53.628 "memory_domains": [ 00:16:53.628 { 00:16:53.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.628 "dma_device_type": 2 00:16:53.628 } 00:16:53.628 ], 00:16:53.628 "driver_specific": {} 00:16:53.628 } 00:16:53.628 ] 00:16:53.628 14:18:45 -- common/autotest_common.sh@905 -- # return 0 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.628 14:18:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.886 14:18:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.886 "name": "Existed_Raid", 00:16:53.886 "uuid": "f80e6616-a3a6-494e-8ae6-b4537f1d4cc9", 00:16:53.886 "strip_size_kb": 64, 00:16:53.886 "state": "configuring", 00:16:53.886 "raid_level": "concat", 00:16:53.886 "superblock": true, 00:16:53.886 "num_base_bdevs": 4, 00:16:53.886 "num_base_bdevs_discovered": 3, 00:16:53.886 "num_base_bdevs_operational": 4, 00:16:53.887 "base_bdevs_list": [ 00:16:53.887 { 
00:16:53.887 "name": "BaseBdev1", 00:16:53.887 "uuid": "45f7d3aa-9151-422f-859c-2230a6ac0071", 00:16:53.887 "is_configured": true, 00:16:53.887 "data_offset": 2048, 00:16:53.887 "data_size": 63488 00:16:53.887 }, 00:16:53.887 { 00:16:53.887 "name": "BaseBdev2", 00:16:53.887 "uuid": "c03a2fcd-863d-491d-853f-c0ca44908d5b", 00:16:53.887 "is_configured": true, 00:16:53.887 "data_offset": 2048, 00:16:53.887 "data_size": 63488 00:16:53.887 }, 00:16:53.887 { 00:16:53.887 "name": "BaseBdev3", 00:16:53.887 "uuid": "25ddd1df-f66b-48a9-adc6-6f973c58d692", 00:16:53.887 "is_configured": true, 00:16:53.887 "data_offset": 2048, 00:16:53.887 "data_size": 63488 00:16:53.887 }, 00:16:53.887 { 00:16:53.887 "name": "BaseBdev4", 00:16:53.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.887 "is_configured": false, 00:16:53.887 "data_offset": 0, 00:16:53.887 "data_size": 0 00:16:53.887 } 00:16:53.887 ] 00:16:53.887 }' 00:16:53.887 14:18:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.887 14:18:45 -- common/autotest_common.sh@10 -- # set +x 00:16:54.455 14:18:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:54.714 [2024-11-18 14:18:46.572043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.714 [2024-11-18 14:18:46.572263] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:54.714 [2024-11-18 14:18:46.572279] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:54.714 [2024-11-18 14:18:46.572449] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:16:54.714 BaseBdev4 00:16:54.714 [2024-11-18 14:18:46.572869] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:54.714 [2024-11-18 14:18:46.572894] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:54.714 [2024-11-18 14:18:46.573053] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.714 14:18:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:54.715 14:18:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:54.715 14:18:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:54.715 14:18:46 -- common/autotest_common.sh@899 -- # local i 00:16:54.715 14:18:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:54.715 14:18:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:54.715 14:18:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.973 14:18:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:54.973 [ 00:16:54.973 { 00:16:54.973 "name": "BaseBdev4", 00:16:54.973 "aliases": [ 00:16:54.973 "453f8e69-ebce-47d3-8e41-aad1fd6afe88" 00:16:54.973 ], 00:16:54.973 "product_name": "Malloc disk", 00:16:54.973 "block_size": 512, 00:16:54.973 "num_blocks": 65536, 00:16:54.973 "uuid": "453f8e69-ebce-47d3-8e41-aad1fd6afe88", 00:16:54.973 "assigned_rate_limits": { 00:16:54.973 "rw_ios_per_sec": 0, 00:16:54.973 "rw_mbytes_per_sec": 0, 00:16:54.973 "r_mbytes_per_sec": 0, 00:16:54.973 "w_mbytes_per_sec": 0 00:16:54.973 }, 00:16:54.973 "claimed": true, 00:16:54.973 "claim_type": "exclusive_write", 00:16:54.973 "zoned": false, 
00:16:54.973 "supported_io_types": { 00:16:54.973 "read": true, 00:16:54.973 "write": true, 00:16:54.973 "unmap": true, 00:16:54.973 "write_zeroes": true, 00:16:54.973 "flush": true, 00:16:54.973 "reset": true, 00:16:54.973 "compare": false, 00:16:54.973 "compare_and_write": false, 00:16:54.973 "abort": true, 00:16:54.973 "nvme_admin": false, 00:16:54.973 "nvme_io": false 00:16:54.973 }, 00:16:54.973 "memory_domains": [ 00:16:54.973 { 00:16:54.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.973 "dma_device_type": 2 00:16:54.973 } 00:16:54.974 ], 00:16:54.974 "driver_specific": {} 00:16:54.974 } 00:16:54.974 ] 00:16:54.974 14:18:47 -- common/autotest_common.sh@905 -- # return 0 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.974 14:18:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.233 14:18:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.233 "name": "Existed_Raid", 00:16:55.233 "uuid": "f80e6616-a3a6-494e-8ae6-b4537f1d4cc9", 00:16:55.233 "strip_size_kb": 64, 00:16:55.233 "state": "online", 00:16:55.233 "raid_level": "concat", 00:16:55.233 "superblock": true, 00:16:55.233 "num_base_bdevs": 4, 00:16:55.233 "num_base_bdevs_discovered": 4, 00:16:55.233 "num_base_bdevs_operational": 4, 00:16:55.233 "base_bdevs_list": [ 00:16:55.233 { 00:16:55.233 "name": "BaseBdev1", 00:16:55.233 "uuid": "45f7d3aa-9151-422f-859c-2230a6ac0071", 00:16:55.233 "is_configured": true, 00:16:55.233 "data_offset": 2048, 00:16:55.233 "data_size": 63488 00:16:55.233 }, 00:16:55.233 { 00:16:55.233 "name": "BaseBdev2", 00:16:55.233 "uuid": "c03a2fcd-863d-491d-853f-c0ca44908d5b", 00:16:55.233 "is_configured": true, 00:16:55.233 "data_offset": 2048, 00:16:55.233 "data_size": 63488 00:16:55.233 }, 00:16:55.233 { 00:16:55.233 "name": "BaseBdev3", 00:16:55.233 "uuid": "25ddd1df-f66b-48a9-adc6-6f973c58d692", 00:16:55.233 "is_configured": true, 00:16:55.233 "data_offset": 2048, 00:16:55.233 "data_size": 63488 00:16:55.233 }, 00:16:55.233 { 00:16:55.233 "name": "BaseBdev4", 00:16:55.233 "uuid": "453f8e69-ebce-47d3-8e41-aad1fd6afe88", 00:16:55.233 "is_configured": true, 00:16:55.233 "data_offset": 2048, 00:16:55.233 "data_size": 63488 00:16:55.233 } 00:16:55.233 ] 00:16:55.233 }' 00:16:55.233 14:18:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.233 14:18:47 -- common/autotest_common.sh@10 -- # set +x 00:16:55.802 14:18:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:56.061 [2024-11-18 14:18:47.960372] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.061 [2024-11-18 14:18:47.960401] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.061 [2024-11-18 14:18:47.960488] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.061 14:18:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.320 14:18:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.320 "name": "Existed_Raid", 00:16:56.320 "uuid": "f80e6616-a3a6-494e-8ae6-b4537f1d4cc9", 00:16:56.320 "strip_size_kb": 64, 00:16:56.321 "state": "offline", 00:16:56.321 "raid_level": "concat", 00:16:56.321 "superblock": true, 00:16:56.321 "num_base_bdevs": 4, 00:16:56.321 "num_base_bdevs_discovered": 3, 00:16:56.321 "num_base_bdevs_operational": 3, 00:16:56.321 "base_bdevs_list": [ 00:16:56.321 { 00:16:56.321 "name": null, 00:16:56.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.321 "is_configured": false, 00:16:56.321 "data_offset": 2048, 00:16:56.321 "data_size": 63488 00:16:56.321 }, 00:16:56.321 { 00:16:56.321 "name": "BaseBdev2", 00:16:56.321 "uuid": "c03a2fcd-863d-491d-853f-c0ca44908d5b", 00:16:56.321 "is_configured": true, 00:16:56.321 "data_offset": 2048, 00:16:56.321 "data_size": 63488 00:16:56.321 }, 00:16:56.321 { 00:16:56.321 "name": "BaseBdev3", 00:16:56.321 "uuid": "25ddd1df-f66b-48a9-adc6-6f973c58d692", 00:16:56.321 "is_configured": true, 00:16:56.321 "data_offset": 2048, 00:16:56.321 "data_size": 63488 00:16:56.321 }, 00:16:56.321 { 00:16:56.321 "name": "BaseBdev4", 00:16:56.321 "uuid": "453f8e69-ebce-47d3-8e41-aad1fd6afe88", 00:16:56.321 "is_configured": true, 00:16:56.321 "data_offset": 2048, 00:16:56.321 "data_size": 63488 00:16:56.321 } 00:16:56.321 ] 00:16:56.321 }' 00:16:56.321 14:18:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.321 14:18:48 -- common/autotest_common.sh@10 -- # set +x 00:16:56.888 14:18:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:56.888 14:18:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:56.888 14:18:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.888 14:18:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:57.147 14:18:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:57.147 14:18:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.147 14:18:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:57.405 [2024-11-18 14:18:49.296312] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:57.405 14:18:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:57.405 14:18:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:57.405 14:18:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.405 14:18:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:57.663 14:18:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:57.664 14:18:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.664 14:18:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:57.923 [2024-11-18 14:18:49.745548] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:57.923 14:18:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:58.182 [2024-11-18 14:18:50.186304] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:58.182 [2024-11-18 14:18:50.186361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:16:58.182 14:18:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:58.182 14:18:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:58.182 14:18:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.182 14:18:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:58.440 14:18:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:58.440 14:18:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:58.440 14:18:50 -- bdev/bdev_raid.sh@287 -- # killprocess 130150 00:16:58.440 14:18:50 -- common/autotest_common.sh@936 -- # '[' -z 130150 ']' 00:16:58.440 14:18:50 -- common/autotest_common.sh@940 -- # kill -0 130150 00:16:58.440 14:18:50 -- common/autotest_common.sh@941 -- # uname 00:16:58.440 14:18:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:58.440 14:18:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130150 00:16:58.440 killing process with pid 130150 00:16:58.440 14:18:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:58.440 14:18:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:58.440 14:18:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130150' 
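Condensed, the teardown traced above removes each remaining malloc base bdev and re-reads the raid list after every step; the raid bdev survives (state offline) until its last member is deleted, at which point the name query comes back empty. A minimal sketch of that loop, reusing the socket and jq filters from the trace (the shell glue around them is a simplification):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_delete "$b"
    $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"]'            # Existed_Raid until the last delete
  done
  $RPC bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'  # empty once the raid is cleaned up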
00:16:58.440 14:18:50 -- common/autotest_common.sh@955 -- # kill 130150 00:16:58.440 14:18:50 -- common/autotest_common.sh@960 -- # wait 130150 00:16:58.440 [2024-11-18 14:18:50.478287] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.440 [2024-11-18 14:18:50.478380] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:58.699 ************************************ 00:16:58.699 END TEST raid_state_function_test_sb 00:16:58.699 ************************************ 00:16:58.699 14:18:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:58.699 00:16:58.699 real 0m13.488s 00:16:58.699 user 0m25.026s 00:16:58.699 sys 0m1.592s 00:16:58.699 14:18:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:58.699 14:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:58.958 14:18:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:58.958 14:18:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.958 14:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:58.958 ************************************ 00:16:58.958 START TEST raid_superblock_test 00:16:58.958 ************************************ 00:16:58.958 14:18:50 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:58.958 14:18:50 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:58.959 14:18:50 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:58.959 14:18:50 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:58.959 14:18:50 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:58.959 14:18:50 -- bdev/bdev_raid.sh@357 -- # raid_pid=130584 00:16:58.959 14:18:50 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:58.959 14:18:50 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130584 /var/tmp/spdk-raid.sock 00:16:58.959 14:18:50 -- common/autotest_common.sh@829 -- # '[' -z 130584 ']' 00:16:58.959 14:18:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:58.959 14:18:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:58.959 14:18:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
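Each test in this suite runs against a throwaway bdev_svc app on a private RPC socket rather than a shared SPDK target. The launch-and-teardown pattern traced here, as a minimal sketch (paths and flags match the trace; the readiness poll stands in for the waitforlisten helper and is an assumption):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock
  $SPDK/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
  raid_pid=$!
  # poll until the app answers RPCs on the UNIX domain socket
  until $SPDK/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # ... the test's RPC sequence runs here, then:
  kill "$raid_pid" && wait "$raid_pid"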
00:16:58.959 14:18:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.959 14:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:58.959 [2024-11-18 14:18:50.873975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:58.959 [2024-11-18 14:18:50.874173] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130584 ] 00:16:58.959 [2024-11-18 14:18:51.013046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.217 [2024-11-18 14:18:51.085454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.217 [2024-11-18 14:18:51.155391] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.784 14:18:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.784 14:18:51 -- common/autotest_common.sh@862 -- # return 0 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:59.784 14:18:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:00.043 malloc1 00:17:00.043 14:18:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.301 [2024-11-18 14:18:52.232108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.301 [2024-11-18 14:18:52.232221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.301 [2024-11-18 14:18:52.232273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:00.301 [2024-11-18 14:18:52.232330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.301 [2024-11-18 14:18:52.234790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.301 [2024-11-18 14:18:52.234855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.301 pt1 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.301 14:18:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:00.560 malloc2 00:17:00.560 14:18:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.818 [2024-11-18 14:18:52.673601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.818 [2024-11-18 14:18:52.673669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.818 [2024-11-18 14:18:52.673716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:00.818 [2024-11-18 14:18:52.673764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.818 [2024-11-18 14:18:52.676016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.818 [2024-11-18 14:18:52.676071] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.818 pt2 00:17:00.818 14:18:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:00.818 14:18:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:00.818 14:18:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:00.818 14:18:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:00.818 14:18:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:00.818 14:18:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.819 14:18:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.819 14:18:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.819 14:18:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:01.077 malloc3 00:17:01.077 14:18:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.077 [2024-11-18 14:18:53.120582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.077 [2024-11-18 14:18:53.120648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.077 [2024-11-18 14:18:53.120691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.077 [2024-11-18 14:18:53.120740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.077 [2024-11-18 14:18:53.122993] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.077 [2024-11-18 14:18:53.123054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.077 pt3 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.077 14:18:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:01.336 malloc4 00:17:01.336 14:18:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:01.595 [2024-11-18 14:18:53.494112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:01.595 [2024-11-18 14:18:53.494203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.595 [2024-11-18 14:18:53.494241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.595 [2024-11-18 14:18:53.494287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.595 [2024-11-18 14:18:53.497081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.595 [2024-11-18 14:18:53.497146] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:01.595 pt4 00:17:01.595 14:18:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:01.595 14:18:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:01.595 14:18:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:01.853 [2024-11-18 14:18:53.678268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.853 [2024-11-18 14:18:53.680314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.853 [2024-11-18 14:18:53.680390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.853 [2024-11-18 14:18:53.680443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:01.853 [2024-11-18 14:18:53.680664] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:01.853 [2024-11-18 14:18:53.680689] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:01.854 [2024-11-18 14:18:53.680838] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:01.854 [2024-11-18 14:18:53.681269] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:01.854 [2024-11-18 14:18:53.681293] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:01.854 [2024-11-18 14:18:53.681454] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.854 "name": "raid_bdev1", 00:17:01.854 "uuid": "03d7430c-cd5f-4462-8080-a4bd2a178b18", 00:17:01.854 "strip_size_kb": 64, 00:17:01.854 "state": "online", 00:17:01.854 "raid_level": "concat", 00:17:01.854 "superblock": true, 00:17:01.854 "num_base_bdevs": 4, 00:17:01.854 "num_base_bdevs_discovered": 4, 00:17:01.854 "num_base_bdevs_operational": 4, 00:17:01.854 "base_bdevs_list": [ 00:17:01.854 { 00:17:01.854 "name": "pt1", 00:17:01.854 "uuid": "636bf174-5a25-523a-ba67-93caa9a70c47", 00:17:01.854 "is_configured": true, 00:17:01.854 "data_offset": 2048, 00:17:01.854 "data_size": 63488 00:17:01.854 }, 00:17:01.854 { 00:17:01.854 "name": "pt2", 00:17:01.854 "uuid": "118f0aa1-89a2-5451-bb69-41bf3f4e561c", 00:17:01.854 "is_configured": true, 00:17:01.854 "data_offset": 2048, 00:17:01.854 "data_size": 63488 00:17:01.854 }, 00:17:01.854 { 00:17:01.854 "name": "pt3", 00:17:01.854 "uuid": "4dbb3c78-3c73-5fdc-a1b0-77f6a46be6aa", 00:17:01.854 "is_configured": true, 00:17:01.854 "data_offset": 2048, 00:17:01.854 "data_size": 63488 00:17:01.854 }, 00:17:01.854 { 00:17:01.854 "name": "pt4", 00:17:01.854 "uuid": "a777a6f9-aad6-51de-8f16-86ec2e4ea3f3", 00:17:01.854 "is_configured": true, 00:17:01.854 "data_offset": 2048, 00:17:01.854 "data_size": 63488 00:17:01.854 } 00:17:01.854 ] 00:17:01.854 }' 00:17:01.854 14:18:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.854 14:18:53 -- common/autotest_common.sh@10 -- # set +x 00:17:02.420 14:18:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:02.420 14:18:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:02.679 [2024-11-18 14:18:54.650546] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.679 14:18:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=03d7430c-cd5f-4462-8080-a4bd2a178b18 00:17:02.679 14:18:54 -- bdev/bdev_raid.sh@380 -- # '[' -z 03d7430c-cd5f-4462-8080-a4bd2a178b18 ']' 00:17:02.679 14:18:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:02.937 [2024-11-18 14:18:54.898382] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.937 [2024-11-18 14:18:54.898408] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.937 [2024-11-18 14:18:54.898496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.937 [2024-11-18 14:18:54.898571] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.937 [2024-11-18 14:18:54.898585] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:02.937 14:18:54 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.937 14:18:54 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:03.196 14:18:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:03.196 14:18:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:03.196 14:18:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:03.196 14:18:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
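The raid_bdev1 dump above is the end state of a short build-up: four malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, assembled into a concat array with an on-disk superblock. Condensed into one sketch, with sizes, flags, and names copied from the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
    $RPC bdev_malloc_create 32 512 -b malloc$i        # 32 MiB at 512-byte blocks = 65536 blocks
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  # -s reserves a superblock region, which is why each member reports
  # data_offset 2048 and data_size 63488 (65536 - 2048) in the dump
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online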
00:17:03.455 14:18:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:03.455 14:18:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:03.455 14:18:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:03.455 14:18:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:03.713 14:18:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:03.713 14:18:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:03.972 14:18:55 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:03.972 14:18:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:04.230 14:18:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:04.230 14:18:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:04.230 14:18:56 -- common/autotest_common.sh@650 -- # local es=0 00:17:04.230 14:18:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:04.230 14:18:56 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.230 14:18:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.230 14:18:56 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.230 14:18:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.230 14:18:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.230 14:18:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:04.230 14:18:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:04.230 14:18:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:04.230 14:18:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:04.230 [2024-11-18 14:18:56.266571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:04.230 [2024-11-18 14:18:56.268552] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:04.230 [2024-11-18 14:18:56.268609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:04.230 [2024-11-18 14:18:56.268649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:04.230 [2024-11-18 14:18:56.268694] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:04.230 [2024-11-18 14:18:56.268769] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:04.230 [2024-11-18 14:18:56.268826] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:04.230 
[2024-11-18 14:18:56.268910] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:04.230 [2024-11-18 14:18:56.268962] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.230 [2024-11-18 14:18:56.268974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:17:04.230 request: 00:17:04.230 { 00:17:04.230 "name": "raid_bdev1", 00:17:04.230 "raid_level": "concat", 00:17:04.230 "base_bdevs": [ 00:17:04.230 "malloc1", 00:17:04.230 "malloc2", 00:17:04.230 "malloc3", 00:17:04.230 "malloc4" 00:17:04.230 ], 00:17:04.230 "superblock": false, 00:17:04.230 "strip_size_kb": 64, 00:17:04.230 "method": "bdev_raid_create", 00:17:04.230 "req_id": 1 00:17:04.230 } 00:17:04.230 Got JSON-RPC error response 00:17:04.230 response: 00:17:04.230 { 00:17:04.230 "code": -17, 00:17:04.230 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:04.230 } 00:17:04.230 14:18:56 -- common/autotest_common.sh@653 -- # es=1 00:17:04.230 14:18:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:04.230 14:18:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:04.230 14:18:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:04.230 14:18:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.230 14:18:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:04.489 14:18:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:04.489 14:18:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:04.489 14:18:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:04.747 [2024-11-18 14:18:56.686609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:04.747 [2024-11-18 14:18:56.686681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.747 [2024-11-18 14:18:56.686716] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:04.747 [2024-11-18 14:18:56.686746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.747 [2024-11-18 14:18:56.688970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.747 [2024-11-18 14:18:56.689041] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:04.747 [2024-11-18 14:18:56.689116] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:04.747 [2024-11-18 14:18:56.689194] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:04.747 pt1 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:04.747 14:18:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.748 14:18:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.748 14:18:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.006 14:18:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.006 "name": "raid_bdev1", 00:17:05.006 "uuid": "03d7430c-cd5f-4462-8080-a4bd2a178b18", 00:17:05.006 "strip_size_kb": 64, 00:17:05.006 "state": "configuring", 00:17:05.006 "raid_level": "concat", 00:17:05.006 "superblock": true, 00:17:05.006 "num_base_bdevs": 4, 00:17:05.006 "num_base_bdevs_discovered": 1, 00:17:05.006 "num_base_bdevs_operational": 4, 00:17:05.006 "base_bdevs_list": [ 00:17:05.006 { 00:17:05.006 "name": "pt1", 00:17:05.006 "uuid": "636bf174-5a25-523a-ba67-93caa9a70c47", 00:17:05.006 "is_configured": true, 00:17:05.006 "data_offset": 2048, 00:17:05.006 "data_size": 63488 00:17:05.006 }, 00:17:05.006 { 00:17:05.006 "name": null, 00:17:05.006 "uuid": "118f0aa1-89a2-5451-bb69-41bf3f4e561c", 00:17:05.006 "is_configured": false, 00:17:05.006 "data_offset": 2048, 00:17:05.006 "data_size": 63488 00:17:05.006 }, 00:17:05.006 { 00:17:05.006 "name": null, 00:17:05.006 "uuid": "4dbb3c78-3c73-5fdc-a1b0-77f6a46be6aa", 00:17:05.006 "is_configured": false, 00:17:05.006 "data_offset": 2048, 00:17:05.006 "data_size": 63488 00:17:05.006 }, 00:17:05.006 { 00:17:05.006 "name": null, 00:17:05.006 "uuid": "a777a6f9-aad6-51de-8f16-86ec2e4ea3f3", 00:17:05.006 "is_configured": false, 00:17:05.006 "data_offset": 2048, 00:17:05.006 "data_size": 63488 00:17:05.006 } 00:17:05.006 ] 00:17:05.006 }' 00:17:05.006 14:18:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.006 14:18:56 -- common/autotest_common.sh@10 -- # set +x 00:17:05.572 14:18:57 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:05.572 14:18:57 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.572 [2024-11-18 14:18:57.630765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.572 [2024-11-18 14:18:57.630831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.572 [2024-11-18 14:18:57.630872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:05.572 [2024-11-18 14:18:57.630897] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.572 [2024-11-18 14:18:57.632027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.572 [2024-11-18 14:18:57.632093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.572 [2024-11-18 14:18:57.632177] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:05.572 [2024-11-18 14:18:57.632205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.572 pt2 00:17:05.572 14:18:57 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:05.831 [2024-11-18 14:18:57.878825] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.831 14:18:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.089 14:18:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.089 "name": "raid_bdev1", 00:17:06.089 "uuid": "03d7430c-cd5f-4462-8080-a4bd2a178b18", 00:17:06.089 "strip_size_kb": 64, 00:17:06.089 "state": "configuring", 00:17:06.089 "raid_level": "concat", 00:17:06.089 "superblock": true, 00:17:06.089 "num_base_bdevs": 4, 00:17:06.089 "num_base_bdevs_discovered": 1, 00:17:06.089 "num_base_bdevs_operational": 4, 00:17:06.089 "base_bdevs_list": [ 00:17:06.089 { 00:17:06.089 "name": "pt1", 00:17:06.089 "uuid": "636bf174-5a25-523a-ba67-93caa9a70c47", 00:17:06.089 "is_configured": true, 00:17:06.089 "data_offset": 2048, 00:17:06.089 "data_size": 63488 00:17:06.089 }, 00:17:06.089 { 00:17:06.089 "name": null, 00:17:06.089 "uuid": "118f0aa1-89a2-5451-bb69-41bf3f4e561c", 00:17:06.089 "is_configured": false, 00:17:06.089 "data_offset": 2048, 00:17:06.089 "data_size": 63488 00:17:06.089 }, 00:17:06.089 { 00:17:06.089 "name": null, 00:17:06.089 "uuid": "4dbb3c78-3c73-5fdc-a1b0-77f6a46be6aa", 00:17:06.089 "is_configured": false, 00:17:06.089 "data_offset": 2048, 00:17:06.089 "data_size": 63488 00:17:06.089 }, 00:17:06.089 { 00:17:06.089 "name": null, 00:17:06.089 "uuid": "a777a6f9-aad6-51de-8f16-86ec2e4ea3f3", 00:17:06.089 "is_configured": false, 00:17:06.089 "data_offset": 2048, 00:17:06.089 "data_size": 63488 00:17:06.089 } 00:17:06.089 ] 00:17:06.089 }' 00:17:06.089 14:18:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.089 14:18:58 -- common/autotest_common.sh@10 -- # set +x 00:17:06.655 14:18:58 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:06.655 14:18:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:06.656 14:18:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.914 [2024-11-18 14:18:58.870981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.914 [2024-11-18 14:18:58.871042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.914 [2024-11-18 14:18:58.871083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:06.914 [2024-11-18 14:18:58.871109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.914 [2024-11-18 14:18:58.871481] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.914 [2024-11-18 14:18:58.871544] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.914 [2024-11-18 14:18:58.871610] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:17:06.914 [2024-11-18 14:18:58.871633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.914 pt2 00:17:06.914 14:18:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:06.914 14:18:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:06.914 14:18:58 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:07.173 [2024-11-18 14:18:59.115059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:07.173 [2024-11-18 14:18:59.115140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.173 [2024-11-18 14:18:59.115190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:07.173 [2024-11-18 14:18:59.115225] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.173 [2024-11-18 14:18:59.115571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.173 [2024-11-18 14:18:59.115633] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:07.173 [2024-11-18 14:18:59.115698] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:07.173 [2024-11-18 14:18:59.115721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:07.173 pt3 00:17:07.173 14:18:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:07.173 14:18:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:07.173 14:18:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:07.431 [2024-11-18 14:18:59.303066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:07.431 [2024-11-18 14:18:59.303136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.431 [2024-11-18 14:18:59.303185] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:07.431 [2024-11-18 14:18:59.303218] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.431 [2024-11-18 14:18:59.303550] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.431 [2024-11-18 14:18:59.303613] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:07.431 [2024-11-18 14:18:59.303677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:07.431 [2024-11-18 14:18:59.303699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:07.431 [2024-11-18 14:18:59.303818] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:07.431 [2024-11-18 14:18:59.303842] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:07.431 [2024-11-18 14:18:59.303919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:17:07.431 [2024-11-18 14:18:59.304241] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:07.431 [2024-11-18 14:18:59.304265] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:07.431 [2024-11-18 14:18:59.304357] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:07.431 pt4 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.431 14:18:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.689 14:18:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.689 "name": "raid_bdev1", 00:17:07.689 "uuid": "03d7430c-cd5f-4462-8080-a4bd2a178b18", 00:17:07.689 "strip_size_kb": 64, 00:17:07.689 "state": "online", 00:17:07.689 "raid_level": "concat", 00:17:07.689 "superblock": true, 00:17:07.689 "num_base_bdevs": 4, 00:17:07.689 "num_base_bdevs_discovered": 4, 00:17:07.689 "num_base_bdevs_operational": 4, 00:17:07.689 "base_bdevs_list": [ 00:17:07.689 { 00:17:07.689 "name": "pt1", 00:17:07.689 "uuid": "636bf174-5a25-523a-ba67-93caa9a70c47", 00:17:07.689 "is_configured": true, 00:17:07.689 "data_offset": 2048, 00:17:07.689 "data_size": 63488 00:17:07.689 }, 00:17:07.689 { 00:17:07.689 "name": "pt2", 00:17:07.689 "uuid": "118f0aa1-89a2-5451-bb69-41bf3f4e561c", 00:17:07.689 "is_configured": true, 00:17:07.689 "data_offset": 2048, 00:17:07.689 "data_size": 63488 00:17:07.689 }, 00:17:07.689 { 00:17:07.689 "name": "pt3", 00:17:07.689 "uuid": "4dbb3c78-3c73-5fdc-a1b0-77f6a46be6aa", 00:17:07.689 "is_configured": true, 00:17:07.689 "data_offset": 2048, 00:17:07.689 "data_size": 63488 00:17:07.689 }, 00:17:07.689 { 00:17:07.689 "name": "pt4", 00:17:07.689 "uuid": "a777a6f9-aad6-51de-8f16-86ec2e4ea3f3", 00:17:07.689 "is_configured": true, 00:17:07.689 "data_offset": 2048, 00:17:07.689 "data_size": 63488 00:17:07.689 } 00:17:07.689 ] 00:17:07.689 }' 00:17:07.689 14:18:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.689 14:18:59 -- common/autotest_common.sh@10 -- # set +x 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:08.256 [2024-11-18 14:19:00.299416] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@430 -- # '[' 03d7430c-cd5f-4462-8080-a4bd2a178b18 '!=' 03d7430c-cd5f-4462-8080-a4bd2a178b18 ']' 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:08.256 14:19:00 -- bdev/bdev_raid.sh@511 -- # killprocess 130584 00:17:08.256 14:19:00 -- common/autotest_common.sh@936 -- # '[' 
-z 130584 ']' 00:17:08.256 14:19:00 -- common/autotest_common.sh@940 -- # kill -0 130584 00:17:08.256 14:19:00 -- common/autotest_common.sh@941 -- # uname 00:17:08.256 14:19:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.256 14:19:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130584 00:17:08.515 14:19:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:08.515 14:19:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:08.515 14:19:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130584' 00:17:08.515 killing process with pid 130584 00:17:08.515 14:19:00 -- common/autotest_common.sh@955 -- # kill 130584 00:17:08.515 [2024-11-18 14:19:00.337123] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.515 [2024-11-18 14:19:00.337181] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.515 14:19:00 -- common/autotest_common.sh@960 -- # wait 130584 00:17:08.515 [2024-11-18 14:19:00.337236] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.515 [2024-11-18 14:19:00.337247] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:17:08.515 [2024-11-18 14:19:00.387444] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:08.774 14:19:00 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:08.774 00:17:08.774 real 0m9.848s 00:17:08.774 user 0m17.872s 00:17:08.774 sys 0m1.228s 00:17:08.774 14:19:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:08.774 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 ************************************ 00:17:08.775 END TEST raid_superblock_test 00:17:08.775 ************************************ 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:08.775 14:19:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:08.775 14:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.775 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 ************************************ 00:17:08.775 START TEST raid_state_function_test 00:17:08.775 ************************************ 00:17:08.775 14:19:00 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.775 14:19:00 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=130893 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130893' 00:17:08.775 Process raid pid: 130893 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:08.775 14:19:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130893 /var/tmp/spdk-raid.sock 00:17:08.775 14:19:00 -- common/autotest_common.sh@829 -- # '[' -z 130893 ']' 00:17:08.775 14:19:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:08.775 14:19:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.775 14:19:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:08.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:08.775 14:19:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.775 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:17:08.775 [2024-11-18 14:19:00.791794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
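raid_state_function_test exercises the same RPC surface for raid1 with no superblock, and with base bdevs that are named before they exist; the array therefore registers in the "configuring" state and moves toward online as members are created and claimed. The create call it issues, as a sketch (names copied from the trace; raid1 takes no -z strip size and no -s superblock flag here):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # configuring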
00:17:08.775 [2024-11-18 14:19:00.792010] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.033 [2024-11-18 14:19:00.937360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.033 [2024-11-18 14:19:01.018675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.033 [2024-11-18 14:19:01.089129] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:09.968 14:19:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.969 14:19:01 -- common/autotest_common.sh@862 -- # return 0 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:09.969 [2024-11-18 14:19:01.903576] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.969 [2024-11-18 14:19:01.903668] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.969 [2024-11-18 14:19:01.903684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.969 [2024-11-18 14:19:01.903705] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.969 [2024-11-18 14:19:01.903714] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.969 [2024-11-18 14:19:01.903760] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.969 [2024-11-18 14:19:01.903771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:09.969 [2024-11-18 14:19:01.903804] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.969 14:19:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.228 14:19:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.228 "name": "Existed_Raid", 00:17:10.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.228 "strip_size_kb": 0, 00:17:10.228 "state": "configuring", 00:17:10.228 "raid_level": "raid1", 00:17:10.228 "superblock": false, 00:17:10.228 "num_base_bdevs": 4, 00:17:10.228 "num_base_bdevs_discovered": 0, 00:17:10.228 "num_base_bdevs_operational": 4, 00:17:10.228 "base_bdevs_list": [ 00:17:10.228 { 00:17:10.228 "name": 
"BaseBdev1", 00:17:10.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.228 "is_configured": false, 00:17:10.228 "data_offset": 0, 00:17:10.228 "data_size": 0 00:17:10.228 }, 00:17:10.228 { 00:17:10.228 "name": "BaseBdev2", 00:17:10.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.228 "is_configured": false, 00:17:10.228 "data_offset": 0, 00:17:10.228 "data_size": 0 00:17:10.228 }, 00:17:10.228 { 00:17:10.228 "name": "BaseBdev3", 00:17:10.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.228 "is_configured": false, 00:17:10.228 "data_offset": 0, 00:17:10.228 "data_size": 0 00:17:10.228 }, 00:17:10.228 { 00:17:10.228 "name": "BaseBdev4", 00:17:10.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.228 "is_configured": false, 00:17:10.228 "data_offset": 0, 00:17:10.228 "data_size": 0 00:17:10.228 } 00:17:10.228 ] 00:17:10.228 }' 00:17:10.228 14:19:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.228 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:17:10.794 14:19:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:11.053 [2024-11-18 14:19:02.967593] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.053 [2024-11-18 14:19:02.967626] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:11.053 14:19:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:11.312 [2024-11-18 14:19:03.219687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.312 [2024-11-18 14:19:03.219737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.312 [2024-11-18 14:19:03.219748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.312 [2024-11-18 14:19:03.219776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.312 [2024-11-18 14:19:03.219786] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:11.312 [2024-11-18 14:19:03.219804] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:11.312 [2024-11-18 14:19:03.219812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:11.312 [2024-11-18 14:19:03.219841] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:11.312 14:19:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.571 [2024-11-18 14:19:03.421818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.571 BaseBdev1 00:17:11.571 14:19:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:11.571 14:19:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:11.571 14:19:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.571 14:19:03 -- common/autotest_common.sh@899 -- # local i 00:17:11.571 14:19:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.571 14:19:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.571 14:19:03 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.571 14:19:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.830 [ 00:17:11.830 { 00:17:11.830 "name": "BaseBdev1", 00:17:11.830 "aliases": [ 00:17:11.830 "e5746e89-88eb-4d83-be73-277102142b1a" 00:17:11.830 ], 00:17:11.830 "product_name": "Malloc disk", 00:17:11.830 "block_size": 512, 00:17:11.830 "num_blocks": 65536, 00:17:11.830 "uuid": "e5746e89-88eb-4d83-be73-277102142b1a", 00:17:11.830 "assigned_rate_limits": { 00:17:11.830 "rw_ios_per_sec": 0, 00:17:11.830 "rw_mbytes_per_sec": 0, 00:17:11.830 "r_mbytes_per_sec": 0, 00:17:11.830 "w_mbytes_per_sec": 0 00:17:11.830 }, 00:17:11.830 "claimed": true, 00:17:11.830 "claim_type": "exclusive_write", 00:17:11.830 "zoned": false, 00:17:11.830 "supported_io_types": { 00:17:11.830 "read": true, 00:17:11.830 "write": true, 00:17:11.830 "unmap": true, 00:17:11.830 "write_zeroes": true, 00:17:11.830 "flush": true, 00:17:11.830 "reset": true, 00:17:11.830 "compare": false, 00:17:11.830 "compare_and_write": false, 00:17:11.830 "abort": true, 00:17:11.830 "nvme_admin": false, 00:17:11.830 "nvme_io": false 00:17:11.830 }, 00:17:11.830 "memory_domains": [ 00:17:11.830 { 00:17:11.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.830 "dma_device_type": 2 00:17:11.830 } 00:17:11.830 ], 00:17:11.830 "driver_specific": {} 00:17:11.830 } 00:17:11.830 ] 00:17:11.830 14:19:03 -- common/autotest_common.sh@905 -- # return 0 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.830 14:19:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.087 14:19:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.087 "name": "Existed_Raid", 00:17:12.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.087 "strip_size_kb": 0, 00:17:12.087 "state": "configuring", 00:17:12.087 "raid_level": "raid1", 00:17:12.087 "superblock": false, 00:17:12.087 "num_base_bdevs": 4, 00:17:12.087 "num_base_bdevs_discovered": 1, 00:17:12.087 "num_base_bdevs_operational": 4, 00:17:12.087 "base_bdevs_list": [ 00:17:12.087 { 00:17:12.087 "name": "BaseBdev1", 00:17:12.087 "uuid": "e5746e89-88eb-4d83-be73-277102142b1a", 00:17:12.087 "is_configured": true, 00:17:12.087 "data_offset": 0, 00:17:12.087 "data_size": 65536 00:17:12.087 }, 00:17:12.087 { 00:17:12.087 "name": "BaseBdev2", 00:17:12.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.087 "is_configured": false, 00:17:12.087 "data_offset": 0, 00:17:12.087 "data_size": 0 00:17:12.087 }, 
00:17:12.087 { 00:17:12.087 "name": "BaseBdev3", 00:17:12.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.087 "is_configured": false, 00:17:12.087 "data_offset": 0, 00:17:12.087 "data_size": 0 00:17:12.087 }, 00:17:12.087 { 00:17:12.087 "name": "BaseBdev4", 00:17:12.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.087 "is_configured": false, 00:17:12.087 "data_offset": 0, 00:17:12.087 "data_size": 0 00:17:12.087 } 00:17:12.087 ] 00:17:12.087 }' 00:17:12.087 14:19:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.087 14:19:04 -- common/autotest_common.sh@10 -- # set +x 00:17:12.652 14:19:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:12.911 [2024-11-18 14:19:04.818067] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:12.911 [2024-11-18 14:19:04.818122] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:12.911 14:19:04 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:12.911 14:19:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:13.169 [2024-11-18 14:19:05.006175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.169 [2024-11-18 14:19:05.008154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.170 [2024-11-18 14:19:05.008239] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.170 [2024-11-18 14:19:05.008253] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:13.170 [2024-11-18 14:19:05.008282] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:13.170 [2024-11-18 14:19:05.008292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:13.170 [2024-11-18 14:19:05.008311] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.170 "name": "Existed_Raid", 00:17:13.170 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:13.170 "strip_size_kb": 0, 00:17:13.170 "state": "configuring", 00:17:13.170 "raid_level": "raid1", 00:17:13.170 "superblock": false, 00:17:13.170 "num_base_bdevs": 4, 00:17:13.170 "num_base_bdevs_discovered": 1, 00:17:13.170 "num_base_bdevs_operational": 4, 00:17:13.170 "base_bdevs_list": [ 00:17:13.170 { 00:17:13.170 "name": "BaseBdev1", 00:17:13.170 "uuid": "e5746e89-88eb-4d83-be73-277102142b1a", 00:17:13.170 "is_configured": true, 00:17:13.170 "data_offset": 0, 00:17:13.170 "data_size": 65536 00:17:13.170 }, 00:17:13.170 { 00:17:13.170 "name": "BaseBdev2", 00:17:13.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.170 "is_configured": false, 00:17:13.170 "data_offset": 0, 00:17:13.170 "data_size": 0 00:17:13.170 }, 00:17:13.170 { 00:17:13.170 "name": "BaseBdev3", 00:17:13.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.170 "is_configured": false, 00:17:13.170 "data_offset": 0, 00:17:13.170 "data_size": 0 00:17:13.170 }, 00:17:13.170 { 00:17:13.170 "name": "BaseBdev4", 00:17:13.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.170 "is_configured": false, 00:17:13.170 "data_offset": 0, 00:17:13.170 "data_size": 0 00:17:13.170 } 00:17:13.170 ] 00:17:13.170 }' 00:17:13.170 14:19:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.170 14:19:05 -- common/autotest_common.sh@10 -- # set +x 00:17:14.105 14:19:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:14.105 [2024-11-18 14:19:06.091933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.105 BaseBdev2 00:17:14.105 14:19:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:14.105 14:19:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:14.105 14:19:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:14.105 14:19:06 -- common/autotest_common.sh@899 -- # local i 00:17:14.105 14:19:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:14.105 14:19:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:14.105 14:19:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.363 14:19:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:14.621 [ 00:17:14.621 { 00:17:14.621 "name": "BaseBdev2", 00:17:14.621 "aliases": [ 00:17:14.621 "1ef8e1e4-3a7e-4cbe-8317-eba07e1dff07" 00:17:14.621 ], 00:17:14.621 "product_name": "Malloc disk", 00:17:14.621 "block_size": 512, 00:17:14.621 "num_blocks": 65536, 00:17:14.621 "uuid": "1ef8e1e4-3a7e-4cbe-8317-eba07e1dff07", 00:17:14.621 "assigned_rate_limits": { 00:17:14.621 "rw_ios_per_sec": 0, 00:17:14.621 "rw_mbytes_per_sec": 0, 00:17:14.621 "r_mbytes_per_sec": 0, 00:17:14.621 "w_mbytes_per_sec": 0 00:17:14.621 }, 00:17:14.621 "claimed": true, 00:17:14.621 "claim_type": "exclusive_write", 00:17:14.621 "zoned": false, 00:17:14.621 "supported_io_types": { 00:17:14.621 "read": true, 00:17:14.621 "write": true, 00:17:14.621 "unmap": true, 00:17:14.621 "write_zeroes": true, 00:17:14.621 "flush": true, 00:17:14.621 "reset": true, 00:17:14.621 "compare": false, 00:17:14.621 "compare_and_write": false, 00:17:14.621 "abort": true, 00:17:14.621 "nvme_admin": false, 00:17:14.621 "nvme_io": false 00:17:14.621 }, 00:17:14.621 "memory_domains": [ 00:17:14.621 { 
00:17:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.621 "dma_device_type": 2 00:17:14.621 } 00:17:14.621 ], 00:17:14.621 "driver_specific": {} 00:17:14.621 } 00:17:14.621 ] 00:17:14.621 14:19:06 -- common/autotest_common.sh@905 -- # return 0 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.621 14:19:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.880 14:19:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.880 "name": "Existed_Raid", 00:17:14.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.880 "strip_size_kb": 0, 00:17:14.880 "state": "configuring", 00:17:14.880 "raid_level": "raid1", 00:17:14.880 "superblock": false, 00:17:14.880 "num_base_bdevs": 4, 00:17:14.880 "num_base_bdevs_discovered": 2, 00:17:14.880 "num_base_bdevs_operational": 4, 00:17:14.880 "base_bdevs_list": [ 00:17:14.880 { 00:17:14.880 "name": "BaseBdev1", 00:17:14.880 "uuid": "e5746e89-88eb-4d83-be73-277102142b1a", 00:17:14.880 "is_configured": true, 00:17:14.880 "data_offset": 0, 00:17:14.880 "data_size": 65536 00:17:14.880 }, 00:17:14.880 { 00:17:14.880 "name": "BaseBdev2", 00:17:14.880 "uuid": "1ef8e1e4-3a7e-4cbe-8317-eba07e1dff07", 00:17:14.880 "is_configured": true, 00:17:14.880 "data_offset": 0, 00:17:14.880 "data_size": 65536 00:17:14.880 }, 00:17:14.880 { 00:17:14.880 "name": "BaseBdev3", 00:17:14.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.880 "is_configured": false, 00:17:14.880 "data_offset": 0, 00:17:14.880 "data_size": 0 00:17:14.880 }, 00:17:14.880 { 00:17:14.880 "name": "BaseBdev4", 00:17:14.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.880 "is_configured": false, 00:17:14.880 "data_offset": 0, 00:17:14.880 "data_size": 0 00:17:14.880 } 00:17:14.880 ] 00:17:14.880 }' 00:17:14.880 14:19:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.880 14:19:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.446 14:19:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:15.446 [2024-11-18 14:19:07.511996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.446 BaseBdev3 00:17:15.704 14:19:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:15.704 14:19:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:15.704 14:19:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:15.704 14:19:07 -- 
common/autotest_common.sh@899 -- # local i 00:17:15.704 14:19:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:15.704 14:19:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:15.704 14:19:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.704 14:19:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:15.962 [ 00:17:15.962 { 00:17:15.962 "name": "BaseBdev3", 00:17:15.962 "aliases": [ 00:17:15.962 "a4cd3b00-c35c-4bfa-900a-59d08d9dfac0" 00:17:15.962 ], 00:17:15.962 "product_name": "Malloc disk", 00:17:15.962 "block_size": 512, 00:17:15.962 "num_blocks": 65536, 00:17:15.962 "uuid": "a4cd3b00-c35c-4bfa-900a-59d08d9dfac0", 00:17:15.962 "assigned_rate_limits": { 00:17:15.962 "rw_ios_per_sec": 0, 00:17:15.962 "rw_mbytes_per_sec": 0, 00:17:15.962 "r_mbytes_per_sec": 0, 00:17:15.962 "w_mbytes_per_sec": 0 00:17:15.962 }, 00:17:15.962 "claimed": true, 00:17:15.962 "claim_type": "exclusive_write", 00:17:15.962 "zoned": false, 00:17:15.962 "supported_io_types": { 00:17:15.962 "read": true, 00:17:15.962 "write": true, 00:17:15.962 "unmap": true, 00:17:15.962 "write_zeroes": true, 00:17:15.962 "flush": true, 00:17:15.962 "reset": true, 00:17:15.962 "compare": false, 00:17:15.962 "compare_and_write": false, 00:17:15.962 "abort": true, 00:17:15.962 "nvme_admin": false, 00:17:15.962 "nvme_io": false 00:17:15.962 }, 00:17:15.962 "memory_domains": [ 00:17:15.962 { 00:17:15.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.962 "dma_device_type": 2 00:17:15.962 } 00:17:15.962 ], 00:17:15.962 "driver_specific": {} 00:17:15.962 } 00:17:15.963 ] 00:17:15.963 14:19:07 -- common/autotest_common.sh@905 -- # return 0 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.963 14:19:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.221 14:19:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.221 "name": "Existed_Raid", 00:17:16.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.221 "strip_size_kb": 0, 00:17:16.221 "state": "configuring", 00:17:16.221 "raid_level": "raid1", 00:17:16.221 "superblock": false, 00:17:16.221 "num_base_bdevs": 4, 00:17:16.221 "num_base_bdevs_discovered": 3, 00:17:16.221 "num_base_bdevs_operational": 4, 00:17:16.221 "base_bdevs_list": [ 00:17:16.221 { 00:17:16.221 "name": "BaseBdev1", 
00:17:16.221 "uuid": "e5746e89-88eb-4d83-be73-277102142b1a", 00:17:16.221 "is_configured": true, 00:17:16.221 "data_offset": 0, 00:17:16.221 "data_size": 65536 00:17:16.221 }, 00:17:16.221 { 00:17:16.221 "name": "BaseBdev2", 00:17:16.221 "uuid": "1ef8e1e4-3a7e-4cbe-8317-eba07e1dff07", 00:17:16.221 "is_configured": true, 00:17:16.221 "data_offset": 0, 00:17:16.221 "data_size": 65536 00:17:16.221 }, 00:17:16.221 { 00:17:16.221 "name": "BaseBdev3", 00:17:16.221 "uuid": "a4cd3b00-c35c-4bfa-900a-59d08d9dfac0", 00:17:16.221 "is_configured": true, 00:17:16.221 "data_offset": 0, 00:17:16.221 "data_size": 65536 00:17:16.221 }, 00:17:16.221 { 00:17:16.221 "name": "BaseBdev4", 00:17:16.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.221 "is_configured": false, 00:17:16.221 "data_offset": 0, 00:17:16.221 "data_size": 0 00:17:16.221 } 00:17:16.221 ] 00:17:16.221 }' 00:17:16.221 14:19:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.221 14:19:08 -- common/autotest_common.sh@10 -- # set +x 00:17:16.787 14:19:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:17.045 [2024-11-18 14:19:08.904181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:17.045 [2024-11-18 14:19:08.904255] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:17.045 [2024-11-18 14:19:08.904268] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:17.045 [2024-11-18 14:19:08.904451] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:17.045 [2024-11-18 14:19:08.904879] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:17.045 [2024-11-18 14:19:08.904903] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:17.045 [2024-11-18 14:19:08.905161] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.045 BaseBdev4 00:17:17.045 14:19:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:17.045 14:19:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:17.045 14:19:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:17.045 14:19:08 -- common/autotest_common.sh@899 -- # local i 00:17:17.045 14:19:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:17.046 14:19:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:17.046 14:19:08 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.046 14:19:09 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:17.307 [ 00:17:17.307 { 00:17:17.307 "name": "BaseBdev4", 00:17:17.307 "aliases": [ 00:17:17.307 "7899cd84-6e82-4df9-8142-be70616c78d6" 00:17:17.307 ], 00:17:17.307 "product_name": "Malloc disk", 00:17:17.307 "block_size": 512, 00:17:17.307 "num_blocks": 65536, 00:17:17.307 "uuid": "7899cd84-6e82-4df9-8142-be70616c78d6", 00:17:17.307 "assigned_rate_limits": { 00:17:17.307 "rw_ios_per_sec": 0, 00:17:17.307 "rw_mbytes_per_sec": 0, 00:17:17.307 "r_mbytes_per_sec": 0, 00:17:17.307 "w_mbytes_per_sec": 0 00:17:17.307 }, 00:17:17.307 "claimed": true, 00:17:17.307 "claim_type": "exclusive_write", 00:17:17.307 "zoned": false, 00:17:17.307 "supported_io_types": { 
00:17:17.307 "read": true, 00:17:17.307 "write": true, 00:17:17.307 "unmap": true, 00:17:17.307 "write_zeroes": true, 00:17:17.307 "flush": true, 00:17:17.307 "reset": true, 00:17:17.307 "compare": false, 00:17:17.307 "compare_and_write": false, 00:17:17.307 "abort": true, 00:17:17.307 "nvme_admin": false, 00:17:17.307 "nvme_io": false 00:17:17.307 }, 00:17:17.307 "memory_domains": [ 00:17:17.307 { 00:17:17.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.307 "dma_device_type": 2 00:17:17.308 } 00:17:17.308 ], 00:17:17.308 "driver_specific": {} 00:17:17.308 } 00:17:17.308 ] 00:17:17.308 14:19:09 -- common/autotest_common.sh@905 -- # return 0 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.308 14:19:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.567 14:19:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.567 "name": "Existed_Raid", 00:17:17.567 "uuid": "7e3f16c2-9fcf-40ea-814d-cfba7420f839", 00:17:17.567 "strip_size_kb": 0, 00:17:17.567 "state": "online", 00:17:17.567 "raid_level": "raid1", 00:17:17.567 "superblock": false, 00:17:17.567 "num_base_bdevs": 4, 00:17:17.567 "num_base_bdevs_discovered": 4, 00:17:17.567 "num_base_bdevs_operational": 4, 00:17:17.567 "base_bdevs_list": [ 00:17:17.567 { 00:17:17.567 "name": "BaseBdev1", 00:17:17.567 "uuid": "e5746e89-88eb-4d83-be73-277102142b1a", 00:17:17.567 "is_configured": true, 00:17:17.567 "data_offset": 0, 00:17:17.567 "data_size": 65536 00:17:17.567 }, 00:17:17.567 { 00:17:17.567 "name": "BaseBdev2", 00:17:17.567 "uuid": "1ef8e1e4-3a7e-4cbe-8317-eba07e1dff07", 00:17:17.567 "is_configured": true, 00:17:17.567 "data_offset": 0, 00:17:17.567 "data_size": 65536 00:17:17.567 }, 00:17:17.567 { 00:17:17.567 "name": "BaseBdev3", 00:17:17.567 "uuid": "a4cd3b00-c35c-4bfa-900a-59d08d9dfac0", 00:17:17.567 "is_configured": true, 00:17:17.567 "data_offset": 0, 00:17:17.567 "data_size": 65536 00:17:17.567 }, 00:17:17.567 { 00:17:17.567 "name": "BaseBdev4", 00:17:17.567 "uuid": "7899cd84-6e82-4df9-8142-be70616c78d6", 00:17:17.567 "is_configured": true, 00:17:17.567 "data_offset": 0, 00:17:17.567 "data_size": 65536 00:17:17.567 } 00:17:17.567 ] 00:17:17.567 }' 00:17:17.567 14:19:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.567 14:19:09 -- common/autotest_common.sh@10 -- # set +x 00:17:18.133 14:19:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:18.391 [2024-11-18 14:19:10.304603] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.391 14:19:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.654 14:19:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.654 "name": "Existed_Raid", 00:17:18.654 "uuid": "7e3f16c2-9fcf-40ea-814d-cfba7420f839", 00:17:18.654 "strip_size_kb": 0, 00:17:18.654 "state": "online", 00:17:18.654 "raid_level": "raid1", 00:17:18.654 "superblock": false, 00:17:18.654 "num_base_bdevs": 4, 00:17:18.654 "num_base_bdevs_discovered": 3, 00:17:18.654 "num_base_bdevs_operational": 3, 00:17:18.654 "base_bdevs_list": [ 00:17:18.654 { 00:17:18.654 "name": null, 00:17:18.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.654 "is_configured": false, 00:17:18.654 "data_offset": 0, 00:17:18.654 "data_size": 65536 00:17:18.654 }, 00:17:18.654 { 00:17:18.654 "name": "BaseBdev2", 00:17:18.654 "uuid": "1ef8e1e4-3a7e-4cbe-8317-eba07e1dff07", 00:17:18.654 "is_configured": true, 00:17:18.654 "data_offset": 0, 00:17:18.654 "data_size": 65536 00:17:18.654 }, 00:17:18.654 { 00:17:18.654 "name": "BaseBdev3", 00:17:18.654 "uuid": "a4cd3b00-c35c-4bfa-900a-59d08d9dfac0", 00:17:18.654 "is_configured": true, 00:17:18.654 "data_offset": 0, 00:17:18.654 "data_size": 65536 00:17:18.654 }, 00:17:18.654 { 00:17:18.654 "name": "BaseBdev4", 00:17:18.654 "uuid": "7899cd84-6e82-4df9-8142-be70616c78d6", 00:17:18.654 "is_configured": true, 00:17:18.654 "data_offset": 0, 00:17:18.654 "data_size": 65536 00:17:18.654 } 00:17:18.654 ] 00:17:18.654 }' 00:17:18.654 14:19:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.654 14:19:10 -- common/autotest_common.sh@10 -- # set +x 00:17:19.291 14:19:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:19.291 14:19:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:19.291 14:19:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.291 14:19:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:19.554 14:19:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:19.554 14:19:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:19.554 14:19:11 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:19.813 [2024-11-18 14:19:11.674286] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:19.813 14:19:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:19.813 14:19:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:19.813 14:19:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.813 14:19:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.071 14:19:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.071 14:19:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.071 14:19:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:20.330 [2024-11-18 14:19:12.160329] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:20.330 14:19:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:20.330 14:19:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.330 14:19:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.330 14:19:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:20.589 14:19:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:20.589 14:19:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:20.589 14:19:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:20.847 [2024-11-18 14:19:12.681494] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:20.848 [2024-11-18 14:19:12.681527] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.848 [2024-11-18 14:19:12.681621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.848 [2024-11-18 14:19:12.694619] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.848 [2024-11-18 14:19:12.694652] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:17:20.848 14:19:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:20.848 14:19:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:20.848 14:19:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.848 14:19:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.106 14:19:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:21.106 14:19:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:21.106 14:19:12 -- bdev/bdev_raid.sh@287 -- # killprocess 130893 00:17:21.106 14:19:12 -- common/autotest_common.sh@936 -- # '[' -z 130893 ']' 00:17:21.106 14:19:12 -- common/autotest_common.sh@940 -- # kill -0 130893 00:17:21.106 14:19:12 -- common/autotest_common.sh@941 -- # uname 00:17:21.106 14:19:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.106 14:19:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130893 00:17:21.106 14:19:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:21.106 14:19:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:21.106 14:19:12 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 130893' 00:17:21.106 killing process with pid 130893 00:17:21.106 14:19:12 -- common/autotest_common.sh@955 -- # kill 130893 00:17:21.106 [2024-11-18 14:19:12.971964] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.106 [2024-11-18 14:19:12.972059] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.106 14:19:12 -- common/autotest_common.sh@960 -- # wait 130893 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:21.365 00:17:21.365 real 0m12.530s 00:17:21.365 user 0m23.153s 00:17:21.365 sys 0m1.527s 00:17:21.365 14:19:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:21.365 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:17:21.365 ************************************ 00:17:21.365 END TEST raid_state_function_test 00:17:21.365 ************************************ 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:21.365 14:19:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:21.365 14:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.365 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:17:21.365 ************************************ 00:17:21.365 START TEST raid_state_function_test_sb 00:17:21.365 ************************************ 00:17:21.365 14:19:13 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=131313 00:17:21.365 Process raid pid: 131313 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131313' 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:21.365 14:19:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131313 /var/tmp/spdk-raid.sock 00:17:21.365 14:19:13 -- common/autotest_common.sh@829 -- # '[' -z 131313 ']' 00:17:21.365 14:19:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:21.365 14:19:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.365 14:19:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:21.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:21.365 14:19:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.365 14:19:13 -- common/autotest_common.sh@10 -- # set +x 00:17:21.365 [2024-11-18 14:19:13.381266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:21.365 [2024-11-18 14:19:13.381507] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.624 [2024-11-18 14:19:13.529046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.624 [2024-11-18 14:19:13.598282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.624 [2024-11-18 14:19:13.668495] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.561 14:19:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.561 14:19:14 -- common/autotest_common.sh@862 -- # return 0 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:22.561 [2024-11-18 14:19:14.558874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:22.561 [2024-11-18 14:19:14.558967] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:22.561 [2024-11-18 14:19:14.558983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.561 [2024-11-18 14:19:14.559005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.561 [2024-11-18 14:19:14.559013] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.561 [2024-11-18 14:19:14.559057] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.561 [2024-11-18 14:19:14.559067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:22.561 [2024-11-18 14:19:14.559097] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:22.561 14:19:14 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.561 14:19:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.819 14:19:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.819 "name": "Existed_Raid", 00:17:22.819 "uuid": "5670ce87-f59a-469b-b37d-6f9335c41352", 00:17:22.819 "strip_size_kb": 0, 00:17:22.819 "state": "configuring", 00:17:22.819 "raid_level": "raid1", 00:17:22.819 "superblock": true, 00:17:22.819 "num_base_bdevs": 4, 00:17:22.819 "num_base_bdevs_discovered": 0, 00:17:22.819 "num_base_bdevs_operational": 4, 00:17:22.819 "base_bdevs_list": [ 00:17:22.819 { 00:17:22.819 "name": "BaseBdev1", 00:17:22.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.819 "is_configured": false, 00:17:22.819 "data_offset": 0, 00:17:22.820 "data_size": 0 00:17:22.820 }, 00:17:22.820 { 00:17:22.820 "name": "BaseBdev2", 00:17:22.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.820 "is_configured": false, 00:17:22.820 "data_offset": 0, 00:17:22.820 "data_size": 0 00:17:22.820 }, 00:17:22.820 { 00:17:22.820 "name": "BaseBdev3", 00:17:22.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.820 "is_configured": false, 00:17:22.820 "data_offset": 0, 00:17:22.820 "data_size": 0 00:17:22.820 }, 00:17:22.820 { 00:17:22.820 "name": "BaseBdev4", 00:17:22.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.820 "is_configured": false, 00:17:22.820 "data_offset": 0, 00:17:22.820 "data_size": 0 00:17:22.820 } 00:17:22.820 ] 00:17:22.820 }' 00:17:22.820 14:19:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.820 14:19:14 -- common/autotest_common.sh@10 -- # set +x 00:17:23.386 14:19:15 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:23.645 [2024-11-18 14:19:15.650874] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:23.645 [2024-11-18 14:19:15.650910] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:23.645 14:19:15 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:23.904 [2024-11-18 14:19:15.838944] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.904 [2024-11-18 14:19:15.838995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.904 [2024-11-18 14:19:15.839007] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.904 [2024-11-18 14:19:15.839035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.904 [2024-11-18 14:19:15.839045] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.904 [2024-11-18 14:19:15.839063] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.904 [2024-11-18 14:19:15.839071] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:23.904 [2024-11-18 14:19:15.839098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:23.904 14:19:15 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.163 [2024-11-18 14:19:16.040969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.163 BaseBdev1 00:17:24.163 14:19:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:24.163 14:19:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:24.163 14:19:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.163 14:19:16 -- common/autotest_common.sh@899 -- # local i 00:17:24.163 14:19:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.163 14:19:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.163 14:19:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.163 14:19:16 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.421 [ 00:17:24.421 { 00:17:24.421 "name": "BaseBdev1", 00:17:24.421 "aliases": [ 00:17:24.421 "5856a28e-9d58-4c8a-97c9-b9aeb708e38d" 00:17:24.421 ], 00:17:24.422 "product_name": "Malloc disk", 00:17:24.422 "block_size": 512, 00:17:24.422 "num_blocks": 65536, 00:17:24.422 "uuid": "5856a28e-9d58-4c8a-97c9-b9aeb708e38d", 00:17:24.422 "assigned_rate_limits": { 00:17:24.422 "rw_ios_per_sec": 0, 00:17:24.422 "rw_mbytes_per_sec": 0, 00:17:24.422 "r_mbytes_per_sec": 0, 00:17:24.422 "w_mbytes_per_sec": 0 00:17:24.422 }, 00:17:24.422 "claimed": true, 00:17:24.422 "claim_type": "exclusive_write", 00:17:24.422 "zoned": false, 00:17:24.422 "supported_io_types": { 00:17:24.422 "read": true, 00:17:24.422 "write": true, 00:17:24.422 "unmap": true, 00:17:24.422 "write_zeroes": true, 00:17:24.422 "flush": true, 00:17:24.422 "reset": true, 00:17:24.422 "compare": false, 00:17:24.422 "compare_and_write": false, 00:17:24.422 "abort": true, 00:17:24.422 "nvme_admin": false, 00:17:24.422 "nvme_io": false 00:17:24.422 }, 00:17:24.422 "memory_domains": [ 00:17:24.422 { 00:17:24.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.422 "dma_device_type": 2 00:17:24.422 } 00:17:24.422 ], 00:17:24.422 "driver_specific": {} 00:17:24.422 } 00:17:24.422 ] 00:17:24.422 14:19:16 -- common/autotest_common.sh@905 -- # return 0 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@125 -- # local tmp 
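(Note on the state check traced next: the verify_raid_bdev_state helper reduces to a single RPC call plus a jq filter against the test socket. A minimal standalone sketch, assuming the repo path and the /var/tmp/spdk-raid.sock socket used throughout this run:

  # fetch all raid bdevs and keep the one under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'

The helper captures that JSON and, per the trace, compares fields such as "state", "raid_level", "strip_size_kb" and "num_base_bdevs_discovered" against the expected values passed as its arguments.)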
00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.422 14:19:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.680 14:19:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.680 "name": "Existed_Raid", 00:17:24.680 "uuid": "588865b9-89c4-4614-93a7-a27f686909f7", 00:17:24.680 "strip_size_kb": 0, 00:17:24.680 "state": "configuring", 00:17:24.680 "raid_level": "raid1", 00:17:24.680 "superblock": true, 00:17:24.680 "num_base_bdevs": 4, 00:17:24.680 "num_base_bdevs_discovered": 1, 00:17:24.680 "num_base_bdevs_operational": 4, 00:17:24.680 "base_bdevs_list": [ 00:17:24.680 { 00:17:24.680 "name": "BaseBdev1", 00:17:24.680 "uuid": "5856a28e-9d58-4c8a-97c9-b9aeb708e38d", 00:17:24.680 "is_configured": true, 00:17:24.680 "data_offset": 2048, 00:17:24.680 "data_size": 63488 00:17:24.680 }, 00:17:24.680 { 00:17:24.680 "name": "BaseBdev2", 00:17:24.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.680 "is_configured": false, 00:17:24.680 "data_offset": 0, 00:17:24.680 "data_size": 0 00:17:24.680 }, 00:17:24.680 { 00:17:24.680 "name": "BaseBdev3", 00:17:24.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.680 "is_configured": false, 00:17:24.680 "data_offset": 0, 00:17:24.680 "data_size": 0 00:17:24.680 }, 00:17:24.680 { 00:17:24.680 "name": "BaseBdev4", 00:17:24.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.680 "is_configured": false, 00:17:24.680 "data_offset": 0, 00:17:24.680 "data_size": 0 00:17:24.680 } 00:17:24.680 ] 00:17:24.680 }' 00:17:24.680 14:19:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.680 14:19:16 -- common/autotest_common.sh@10 -- # set +x 00:17:25.248 14:19:17 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:25.507 [2024-11-18 14:19:17.469197] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.507 [2024-11-18 14:19:17.469395] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:25.507 14:19:17 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:25.507 14:19:17 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:25.766 14:19:17 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:26.025 BaseBdev1 00:17:26.025 14:19:17 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:26.025 14:19:17 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:26.025 14:19:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:26.025 14:19:17 -- common/autotest_common.sh@899 -- # local i 00:17:26.025 14:19:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:26.025 14:19:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:26.025 14:19:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:26.284 14:19:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:26.284 [ 00:17:26.284 { 00:17:26.284 "name": "BaseBdev1", 00:17:26.284 "aliases": [ 00:17:26.284 "ef5243ca-0404-4252-b45c-fb2b630b14c0" 00:17:26.284 ], 00:17:26.284 
"product_name": "Malloc disk", 00:17:26.284 "block_size": 512, 00:17:26.284 "num_blocks": 65536, 00:17:26.284 "uuid": "ef5243ca-0404-4252-b45c-fb2b630b14c0", 00:17:26.284 "assigned_rate_limits": { 00:17:26.284 "rw_ios_per_sec": 0, 00:17:26.284 "rw_mbytes_per_sec": 0, 00:17:26.284 "r_mbytes_per_sec": 0, 00:17:26.284 "w_mbytes_per_sec": 0 00:17:26.284 }, 00:17:26.284 "claimed": false, 00:17:26.284 "zoned": false, 00:17:26.284 "supported_io_types": { 00:17:26.284 "read": true, 00:17:26.284 "write": true, 00:17:26.284 "unmap": true, 00:17:26.284 "write_zeroes": true, 00:17:26.284 "flush": true, 00:17:26.284 "reset": true, 00:17:26.284 "compare": false, 00:17:26.284 "compare_and_write": false, 00:17:26.284 "abort": true, 00:17:26.284 "nvme_admin": false, 00:17:26.284 "nvme_io": false 00:17:26.284 }, 00:17:26.284 "memory_domains": [ 00:17:26.284 { 00:17:26.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.284 "dma_device_type": 2 00:17:26.284 } 00:17:26.284 ], 00:17:26.284 "driver_specific": {} 00:17:26.284 } 00:17:26.284 ] 00:17:26.543 14:19:18 -- common/autotest_common.sh@905 -- # return 0 00:17:26.543 14:19:18 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.543 [2024-11-18 14:19:18.530998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.543 [2024-11-18 14:19:18.536471] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.543 [2024-11-18 14:19:18.536867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.543 [2024-11-18 14:19:18.537110] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.544 [2024-11-18 14:19:18.537418] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.544 [2024-11-18 14:19:18.537638] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.544 [2024-11-18 14:19:18.537907] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.544 14:19:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.803 14:19:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.803 "name": "Existed_Raid", 00:17:26.803 "uuid": 
"217e9c40-5233-46b3-99ca-17a25aeb5b9b", 00:17:26.803 "strip_size_kb": 0, 00:17:26.803 "state": "configuring", 00:17:26.803 "raid_level": "raid1", 00:17:26.803 "superblock": true, 00:17:26.803 "num_base_bdevs": 4, 00:17:26.803 "num_base_bdevs_discovered": 1, 00:17:26.803 "num_base_bdevs_operational": 4, 00:17:26.803 "base_bdevs_list": [ 00:17:26.803 { 00:17:26.803 "name": "BaseBdev1", 00:17:26.803 "uuid": "ef5243ca-0404-4252-b45c-fb2b630b14c0", 00:17:26.803 "is_configured": true, 00:17:26.803 "data_offset": 2048, 00:17:26.803 "data_size": 63488 00:17:26.803 }, 00:17:26.803 { 00:17:26.803 "name": "BaseBdev2", 00:17:26.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.803 "is_configured": false, 00:17:26.803 "data_offset": 0, 00:17:26.803 "data_size": 0 00:17:26.803 }, 00:17:26.803 { 00:17:26.803 "name": "BaseBdev3", 00:17:26.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.803 "is_configured": false, 00:17:26.803 "data_offset": 0, 00:17:26.803 "data_size": 0 00:17:26.803 }, 00:17:26.803 { 00:17:26.803 "name": "BaseBdev4", 00:17:26.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.803 "is_configured": false, 00:17:26.803 "data_offset": 0, 00:17:26.803 "data_size": 0 00:17:26.803 } 00:17:26.803 ] 00:17:26.803 }' 00:17:26.803 14:19:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.803 14:19:18 -- common/autotest_common.sh@10 -- # set +x 00:17:27.369 14:19:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.628 [2024-11-18 14:19:19.610785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.628 BaseBdev2 00:17:27.628 14:19:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:27.628 14:19:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:27.628 14:19:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.628 14:19:19 -- common/autotest_common.sh@899 -- # local i 00:17:27.628 14:19:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.628 14:19:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.628 14:19:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.886 14:19:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.145 [ 00:17:28.145 { 00:17:28.145 "name": "BaseBdev2", 00:17:28.145 "aliases": [ 00:17:28.145 "f69c6aef-cef6-495f-b0af-d08c649091cd" 00:17:28.145 ], 00:17:28.145 "product_name": "Malloc disk", 00:17:28.145 "block_size": 512, 00:17:28.145 "num_blocks": 65536, 00:17:28.145 "uuid": "f69c6aef-cef6-495f-b0af-d08c649091cd", 00:17:28.145 "assigned_rate_limits": { 00:17:28.145 "rw_ios_per_sec": 0, 00:17:28.145 "rw_mbytes_per_sec": 0, 00:17:28.145 "r_mbytes_per_sec": 0, 00:17:28.145 "w_mbytes_per_sec": 0 00:17:28.145 }, 00:17:28.145 "claimed": true, 00:17:28.145 "claim_type": "exclusive_write", 00:17:28.145 "zoned": false, 00:17:28.145 "supported_io_types": { 00:17:28.145 "read": true, 00:17:28.145 "write": true, 00:17:28.145 "unmap": true, 00:17:28.146 "write_zeroes": true, 00:17:28.146 "flush": true, 00:17:28.146 "reset": true, 00:17:28.146 "compare": false, 00:17:28.146 "compare_and_write": false, 00:17:28.146 "abort": true, 00:17:28.146 "nvme_admin": false, 00:17:28.146 "nvme_io": false 00:17:28.146 }, 00:17:28.146 "memory_domains": [ 00:17:28.146 { 
00:17:28.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.146 "dma_device_type": 2 00:17:28.146 } 00:17:28.146 ], 00:17:28.146 "driver_specific": {} 00:17:28.146 } 00:17:28.146 ] 00:17:28.146 14:19:20 -- common/autotest_common.sh@905 -- # return 0 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.146 14:19:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.405 14:19:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.405 "name": "Existed_Raid", 00:17:28.405 "uuid": "217e9c40-5233-46b3-99ca-17a25aeb5b9b", 00:17:28.405 "strip_size_kb": 0, 00:17:28.405 "state": "configuring", 00:17:28.405 "raid_level": "raid1", 00:17:28.405 "superblock": true, 00:17:28.405 "num_base_bdevs": 4, 00:17:28.405 "num_base_bdevs_discovered": 2, 00:17:28.405 "num_base_bdevs_operational": 4, 00:17:28.405 "base_bdevs_list": [ 00:17:28.405 { 00:17:28.405 "name": "BaseBdev1", 00:17:28.405 "uuid": "ef5243ca-0404-4252-b45c-fb2b630b14c0", 00:17:28.405 "is_configured": true, 00:17:28.405 "data_offset": 2048, 00:17:28.405 "data_size": 63488 00:17:28.405 }, 00:17:28.405 { 00:17:28.405 "name": "BaseBdev2", 00:17:28.405 "uuid": "f69c6aef-cef6-495f-b0af-d08c649091cd", 00:17:28.405 "is_configured": true, 00:17:28.405 "data_offset": 2048, 00:17:28.405 "data_size": 63488 00:17:28.405 }, 00:17:28.405 { 00:17:28.405 "name": "BaseBdev3", 00:17:28.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.405 "is_configured": false, 00:17:28.405 "data_offset": 0, 00:17:28.405 "data_size": 0 00:17:28.405 }, 00:17:28.405 { 00:17:28.405 "name": "BaseBdev4", 00:17:28.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.405 "is_configured": false, 00:17:28.405 "data_offset": 0, 00:17:28.405 "data_size": 0 00:17:28.405 } 00:17:28.405 ] 00:17:28.405 }' 00:17:28.405 14:19:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.405 14:19:20 -- common/autotest_common.sh@10 -- # set +x 00:17:28.972 14:19:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.231 [2024-11-18 14:19:21.132296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.231 BaseBdev3 00:17:29.231 14:19:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:29.231 14:19:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:29.231 14:19:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:29.231 14:19:21 -- 
common/autotest_common.sh@899 -- # local i 00:17:29.231 14:19:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:29.231 14:19:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:29.231 14:19:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.490 14:19:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:29.490 [ 00:17:29.490 { 00:17:29.490 "name": "BaseBdev3", 00:17:29.490 "aliases": [ 00:17:29.490 "dff9c2e5-2d50-483e-9a86-07fbd1b7c910" 00:17:29.490 ], 00:17:29.490 "product_name": "Malloc disk", 00:17:29.490 "block_size": 512, 00:17:29.490 "num_blocks": 65536, 00:17:29.490 "uuid": "dff9c2e5-2d50-483e-9a86-07fbd1b7c910", 00:17:29.490 "assigned_rate_limits": { 00:17:29.490 "rw_ios_per_sec": 0, 00:17:29.490 "rw_mbytes_per_sec": 0, 00:17:29.490 "r_mbytes_per_sec": 0, 00:17:29.490 "w_mbytes_per_sec": 0 00:17:29.490 }, 00:17:29.490 "claimed": true, 00:17:29.490 "claim_type": "exclusive_write", 00:17:29.490 "zoned": false, 00:17:29.490 "supported_io_types": { 00:17:29.490 "read": true, 00:17:29.490 "write": true, 00:17:29.490 "unmap": true, 00:17:29.490 "write_zeroes": true, 00:17:29.490 "flush": true, 00:17:29.490 "reset": true, 00:17:29.490 "compare": false, 00:17:29.490 "compare_and_write": false, 00:17:29.490 "abort": true, 00:17:29.490 "nvme_admin": false, 00:17:29.490 "nvme_io": false 00:17:29.490 }, 00:17:29.490 "memory_domains": [ 00:17:29.490 { 00:17:29.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.490 "dma_device_type": 2 00:17:29.490 } 00:17:29.490 ], 00:17:29.490 "driver_specific": {} 00:17:29.490 } 00:17:29.490 ] 00:17:29.748 14:19:21 -- common/autotest_common.sh@905 -- # return 0 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.748 "name": "Existed_Raid", 00:17:29.748 "uuid": "217e9c40-5233-46b3-99ca-17a25aeb5b9b", 00:17:29.748 "strip_size_kb": 0, 00:17:29.748 "state": "configuring", 00:17:29.748 "raid_level": "raid1", 00:17:29.748 "superblock": true, 00:17:29.748 "num_base_bdevs": 4, 00:17:29.748 "num_base_bdevs_discovered": 3, 00:17:29.748 "num_base_bdevs_operational": 4, 00:17:29.748 "base_bdevs_list": [ 00:17:29.748 { 00:17:29.748 "name": "BaseBdev1", 00:17:29.748 
"uuid": "ef5243ca-0404-4252-b45c-fb2b630b14c0", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": "BaseBdev2", 00:17:29.748 "uuid": "f69c6aef-cef6-495f-b0af-d08c649091cd", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": "BaseBdev3", 00:17:29.748 "uuid": "dff9c2e5-2d50-483e-9a86-07fbd1b7c910", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": "BaseBdev4", 00:17:29.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.748 "is_configured": false, 00:17:29.748 "data_offset": 0, 00:17:29.748 "data_size": 0 00:17:29.748 } 00:17:29.748 ] 00:17:29.748 }' 00:17:29.748 14:19:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.748 14:19:21 -- common/autotest_common.sh@10 -- # set +x 00:17:30.315 14:19:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:30.574 [2024-11-18 14:19:22.526392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:30.574 [2024-11-18 14:19:22.526773] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:17:30.574 [2024-11-18 14:19:22.526906] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:30.574 [2024-11-18 14:19:22.527062] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:17:30.574 BaseBdev4 00:17:30.574 [2024-11-18 14:19:22.527600] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:17:30.574 [2024-11-18 14:19:22.527753] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:17:30.574 [2024-11-18 14:19:22.527991] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.574 14:19:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:30.574 14:19:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:30.574 14:19:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:30.574 14:19:22 -- common/autotest_common.sh@899 -- # local i 00:17:30.574 14:19:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:30.574 14:19:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:30.574 14:19:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.832 14:19:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:31.091 [ 00:17:31.091 { 00:17:31.091 "name": "BaseBdev4", 00:17:31.091 "aliases": [ 00:17:31.091 "3a8ed187-65ef-4306-afd3-cc9e0c393d2a" 00:17:31.091 ], 00:17:31.091 "product_name": "Malloc disk", 00:17:31.091 "block_size": 512, 00:17:31.091 "num_blocks": 65536, 00:17:31.091 "uuid": "3a8ed187-65ef-4306-afd3-cc9e0c393d2a", 00:17:31.091 "assigned_rate_limits": { 00:17:31.091 "rw_ios_per_sec": 0, 00:17:31.091 "rw_mbytes_per_sec": 0, 00:17:31.091 "r_mbytes_per_sec": 0, 00:17:31.091 "w_mbytes_per_sec": 0 00:17:31.091 }, 00:17:31.091 "claimed": true, 00:17:31.091 "claim_type": "exclusive_write", 00:17:31.091 "zoned": false, 00:17:31.091 "supported_io_types": { 00:17:31.091 
"read": true, 00:17:31.091 "write": true, 00:17:31.091 "unmap": true, 00:17:31.091 "write_zeroes": true, 00:17:31.091 "flush": true, 00:17:31.091 "reset": true, 00:17:31.091 "compare": false, 00:17:31.091 "compare_and_write": false, 00:17:31.091 "abort": true, 00:17:31.091 "nvme_admin": false, 00:17:31.091 "nvme_io": false 00:17:31.091 }, 00:17:31.091 "memory_domains": [ 00:17:31.091 { 00:17:31.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.091 "dma_device_type": 2 00:17:31.091 } 00:17:31.091 ], 00:17:31.091 "driver_specific": {} 00:17:31.091 } 00:17:31.091 ] 00:17:31.091 14:19:23 -- common/autotest_common.sh@905 -- # return 0 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.091 14:19:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.349 14:19:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.349 "name": "Existed_Raid", 00:17:31.349 "uuid": "217e9c40-5233-46b3-99ca-17a25aeb5b9b", 00:17:31.349 "strip_size_kb": 0, 00:17:31.349 "state": "online", 00:17:31.349 "raid_level": "raid1", 00:17:31.349 "superblock": true, 00:17:31.349 "num_base_bdevs": 4, 00:17:31.349 "num_base_bdevs_discovered": 4, 00:17:31.349 "num_base_bdevs_operational": 4, 00:17:31.349 "base_bdevs_list": [ 00:17:31.349 { 00:17:31.349 "name": "BaseBdev1", 00:17:31.349 "uuid": "ef5243ca-0404-4252-b45c-fb2b630b14c0", 00:17:31.349 "is_configured": true, 00:17:31.349 "data_offset": 2048, 00:17:31.349 "data_size": 63488 00:17:31.349 }, 00:17:31.349 { 00:17:31.349 "name": "BaseBdev2", 00:17:31.349 "uuid": "f69c6aef-cef6-495f-b0af-d08c649091cd", 00:17:31.349 "is_configured": true, 00:17:31.349 "data_offset": 2048, 00:17:31.349 "data_size": 63488 00:17:31.349 }, 00:17:31.349 { 00:17:31.349 "name": "BaseBdev3", 00:17:31.349 "uuid": "dff9c2e5-2d50-483e-9a86-07fbd1b7c910", 00:17:31.349 "is_configured": true, 00:17:31.349 "data_offset": 2048, 00:17:31.349 "data_size": 63488 00:17:31.349 }, 00:17:31.349 { 00:17:31.349 "name": "BaseBdev4", 00:17:31.349 "uuid": "3a8ed187-65ef-4306-afd3-cc9e0c393d2a", 00:17:31.349 "is_configured": true, 00:17:31.349 "data_offset": 2048, 00:17:31.349 "data_size": 63488 00:17:31.349 } 00:17:31.349 ] 00:17:31.349 }' 00:17:31.349 14:19:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.349 14:19:23 -- common/autotest_common.sh@10 -- # set +x 00:17:31.915 14:19:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.173 [2024-11-18 14:19:24.070927] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.173 14:19:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.431 14:19:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.431 "name": "Existed_Raid", 00:17:32.431 "uuid": "217e9c40-5233-46b3-99ca-17a25aeb5b9b", 00:17:32.431 "strip_size_kb": 0, 00:17:32.431 "state": "online", 00:17:32.431 "raid_level": "raid1", 00:17:32.431 "superblock": true, 00:17:32.431 "num_base_bdevs": 4, 00:17:32.431 "num_base_bdevs_discovered": 3, 00:17:32.431 "num_base_bdevs_operational": 3, 00:17:32.431 "base_bdevs_list": [ 00:17:32.431 { 00:17:32.431 "name": null, 00:17:32.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.431 "is_configured": false, 00:17:32.431 "data_offset": 2048, 00:17:32.431 "data_size": 63488 00:17:32.431 }, 00:17:32.431 { 00:17:32.431 "name": "BaseBdev2", 00:17:32.431 "uuid": "f69c6aef-cef6-495f-b0af-d08c649091cd", 00:17:32.431 "is_configured": true, 00:17:32.431 "data_offset": 2048, 00:17:32.431 "data_size": 63488 00:17:32.431 }, 00:17:32.431 { 00:17:32.431 "name": "BaseBdev3", 00:17:32.431 "uuid": "dff9c2e5-2d50-483e-9a86-07fbd1b7c910", 00:17:32.431 "is_configured": true, 00:17:32.431 "data_offset": 2048, 00:17:32.431 "data_size": 63488 00:17:32.431 }, 00:17:32.431 { 00:17:32.431 "name": "BaseBdev4", 00:17:32.431 "uuid": "3a8ed187-65ef-4306-afd3-cc9e0c393d2a", 00:17:32.431 "is_configured": true, 00:17:32.431 "data_offset": 2048, 00:17:32.431 "data_size": 63488 00:17:32.431 } 00:17:32.431 ] 00:17:32.431 }' 00:17:32.431 14:19:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.431 14:19:24 -- common/autotest_common.sh@10 -- # set +x 00:17:32.998 14:19:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:32.998 14:19:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:32.998 14:19:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.998 14:19:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.256 14:19:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:33.256 14:19:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.256 14:19:25 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.515 [2024-11-18 14:19:25.456216] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.515 14:19:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:33.515 14:19:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:33.515 14:19:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.515 14:19:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:33.773 14:19:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:33.773 14:19:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.773 14:19:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:34.032 [2024-11-18 14:19:25.890124] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.032 14:19:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.032 14:19:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.032 14:19:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.032 14:19:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:34.290 14:19:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.290 14:19:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.290 14:19:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:34.290 [2024-11-18 14:19:26.348370] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:34.290 [2024-11-18 14:19:26.348511] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.290 [2024-11-18 14:19:26.348716] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.290 [2024-11-18 14:19:26.358030] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.290 [2024-11-18 14:19:26.358176] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:34.548 14:19:26 -- bdev/bdev_raid.sh@287 -- # killprocess 131313 00:17:34.548 14:19:26 -- common/autotest_common.sh@936 -- # '[' -z 131313 ']' 00:17:34.548 14:19:26 -- common/autotest_common.sh@940 -- # kill -0 131313 00:17:34.548 14:19:26 -- common/autotest_common.sh@941 -- # uname 00:17:34.548 14:19:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.548 14:19:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131313 00:17:34.807 14:19:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.807 14:19:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.807 14:19:26 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 131313' 00:17:34.807 killing process with pid 131313 00:17:34.807 14:19:26 -- common/autotest_common.sh@955 -- # kill 131313 00:17:34.807 [2024-11-18 14:19:26.632454] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.807 14:19:26 -- common/autotest_common.sh@960 -- # wait 131313 00:17:34.807 [2024-11-18 14:19:26.632660] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.807 ************************************ 00:17:34.807 END TEST raid_state_function_test_sb 00:17:34.807 ************************************ 00:17:34.807 14:19:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:34.807 00:17:34.807 real 0m13.528s 00:17:34.807 user 0m25.143s 00:17:34.807 sys 0m1.507s 00:17:34.807 14:19:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.807 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:35.066 14:19:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:35.066 14:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.066 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:35.066 ************************************ 00:17:35.066 START TEST raid_superblock_test 00:17:35.066 ************************************ 00:17:35.066 14:19:26 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=131750 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131750 /var/tmp/spdk-raid.sock 00:17:35.066 14:19:26 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:35.066 14:19:26 -- common/autotest_common.sh@829 -- # '[' -z 131750 ']' 00:17:35.066 14:19:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.066 14:19:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.066 14:19:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
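For orientation before the superblock test proper starts: it reuses the same harness as the state-function test that just finished. A private bdev_svc app is launched on its own RPC socket with raid debug logging enabled, and every later step is an rpc.py call against that socket. Below is a minimal sketch of that harness using only the paths and flags visible in this trace; the waitforlisten/killprocess helpers are simplified to plain shell and are not the test's exact code.

  # start a bare bdev service on a dedicated socket, raid debug logs on
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # every test RPC targets this socket rather than the default one
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_get_bdevs all     # empty list until base bdevs exist
  kill "$raid_pid"                 # simplified stand-in for killprocess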
00:17:35.066 14:19:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.066 14:19:26 -- common/autotest_common.sh@10 -- # set +x 00:17:35.066 [2024-11-18 14:19:26.974117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:35.066 [2024-11-18 14:19:26.974561] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131750 ] 00:17:35.066 [2024-11-18 14:19:27.121340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.326 [2024-11-18 14:19:27.199291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.326 [2024-11-18 14:19:27.269141] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.894 14:19:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.894 14:19:27 -- common/autotest_common.sh@862 -- # return 0 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.894 14:19:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:36.153 malloc1 00:17:36.153 14:19:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.412 [2024-11-18 14:19:28.389137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.412 [2024-11-18 14:19:28.389376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.412 [2024-11-18 14:19:28.389458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:36.412 [2024-11-18 14:19:28.389604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.412 [2024-11-18 14:19:28.392316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.412 [2024-11-18 14:19:28.392491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.412 pt1 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.412 14:19:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:36.671 malloc2 00:17:36.671 14:19:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.930 [2024-11-18 14:19:28.826351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.930 [2024-11-18 14:19:28.826596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.930 [2024-11-18 14:19:28.826686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:36.930 [2024-11-18 14:19:28.826957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.930 [2024-11-18 14:19:28.829191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.930 [2024-11-18 14:19:28.829359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.930 pt2 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.930 14:19:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:37.189 malloc3 00:17:37.189 14:19:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.447 [2024-11-18 14:19:29.318712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.447 [2024-11-18 14:19:29.318912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.447 [2024-11-18 14:19:29.318986] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:37.447 [2024-11-18 14:19:29.319323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.447 [2024-11-18 14:19:29.321576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.447 [2024-11-18 14:19:29.321750] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.447 pt3 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:37.447 14:19:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:37.447 malloc4 00:17:37.706 14:19:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:37.706 [2024-11-18 14:19:29.760218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:37.706 [2024-11-18 14:19:29.760433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.706 [2024-11-18 14:19:29.760501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:37.706 [2024-11-18 14:19:29.760773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.706 [2024-11-18 14:19:29.763113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.706 [2024-11-18 14:19:29.763312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:37.706 pt4 00:17:37.706 14:19:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:37.706 14:19:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:37.706 14:19:29 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:37.965 [2024-11-18 14:19:29.952356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.965 [2024-11-18 14:19:29.954491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.965 [2024-11-18 14:19:29.954687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.965 [2024-11-18 14:19:29.954776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:37.965 [2024-11-18 14:19:29.955115] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:37.965 [2024-11-18 14:19:29.955254] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:37.965 [2024-11-18 14:19:29.955427] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:37.965 [2024-11-18 14:19:29.956022] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:37.965 [2024-11-18 14:19:29.956164] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:37.965 [2024-11-18 14:19:29.956426] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
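The assembly just traced condenses to two RPCs per leg plus one create. Below is a sketch of the sequence bdev_raid.sh ran above, with the same sizes, names, and UUIDs; the -s flag is what writes the on-disk superblock the rest of the test depends on.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, as reported)
      $rpc bdev_malloc_create 32 512 -b "malloc$i"
      # wrap it in a passthru bdev with a fixed, well-known UUID
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
           -u "00000000-0000-0000-0000-00000000000$i"
  done
  # raid1 across the four passthru bdevs; -s adds a superblock to each leg
  $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s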
00:17:37.965 14:19:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.224 14:19:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.224 "name": "raid_bdev1", 00:17:38.224 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:38.224 "strip_size_kb": 0, 00:17:38.224 "state": "online", 00:17:38.224 "raid_level": "raid1", 00:17:38.224 "superblock": true, 00:17:38.224 "num_base_bdevs": 4, 00:17:38.224 "num_base_bdevs_discovered": 4, 00:17:38.224 "num_base_bdevs_operational": 4, 00:17:38.224 "base_bdevs_list": [ 00:17:38.224 { 00:17:38.224 "name": "pt1", 00:17:38.224 "uuid": "aaf792b3-6ac5-5b84-91cc-083a0edb5415", 00:17:38.224 "is_configured": true, 00:17:38.224 "data_offset": 2048, 00:17:38.224 "data_size": 63488 00:17:38.224 }, 00:17:38.224 { 00:17:38.224 "name": "pt2", 00:17:38.224 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:38.224 "is_configured": true, 00:17:38.224 "data_offset": 2048, 00:17:38.224 "data_size": 63488 00:17:38.224 }, 00:17:38.224 { 00:17:38.224 "name": "pt3", 00:17:38.224 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:38.224 "is_configured": true, 00:17:38.224 "data_offset": 2048, 00:17:38.224 "data_size": 63488 00:17:38.224 }, 00:17:38.224 { 00:17:38.224 "name": "pt4", 00:17:38.224 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:38.224 "is_configured": true, 00:17:38.224 "data_offset": 2048, 00:17:38.224 "data_size": 63488 00:17:38.224 } 00:17:38.224 ] 00:17:38.224 }' 00:17:38.224 14:19:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.224 14:19:30 -- common/autotest_common.sh@10 -- # set +x 00:17:38.792 14:19:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.792 14:19:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:39.051 [2024-11-18 14:19:31.016779] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.051 14:19:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=40f23ba5-6ade-4606-8f08-9dcfac73858c 00:17:39.051 14:19:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 40f23ba5-6ade-4606-8f08-9dcfac73858c ']' 00:17:39.051 14:19:31 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:39.309 [2024-11-18 14:19:31.264647] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.309 [2024-11-18 14:19:31.264799] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.309 [2024-11-18 14:19:31.265017] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.309 [2024-11-18 14:19:31.265221] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.309 [2024-11-18 14:19:31.265338] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:39.309 14:19:31 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.309 14:19:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:39.568 14:19:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:39.568 14:19:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:39.568 14:19:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.568 14:19:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
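The verify/teardown round trip above is plain JSON plumbing over the same socket: read the array state back with bdev_raid_get_bdevs, filter it with jq, capture the UUID before deleting, and confirm the delete by the listing coming back empty. A sketch using the jq filters that appear in this trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # capture the array UUID while raid_bdev1 still exists
  uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  # inspect the raid-specific view of the same bdev
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  $rpc bdev_raid_delete raid_bdev1
  # after deletion, the category listing yields nothing for any raid bdev
  [ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[]')" ] && echo deleted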
00:17:39.827 14:19:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.827 14:19:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:40.086 14:19:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.086 14:19:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:40.086 14:19:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:40.086 14:19:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:40.344 14:19:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:40.344 14:19:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:40.601 14:19:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:40.601 14:19:32 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:40.601 14:19:32 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.601 14:19:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:40.601 14:19:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.601 14:19:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.601 14:19:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.601 14:19:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.601 14:19:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.601 14:19:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.601 14:19:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:40.601 14:19:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:40.602 14:19:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:40.859 [2024-11-18 14:19:32.808893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.859 [2024-11-18 14:19:32.811136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.859 [2024-11-18 14:19:32.811420] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:40.859 [2024-11-18 14:19:32.811669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:40.859 [2024-11-18 14:19:32.811858] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:40.859 [2024-11-18 14:19:32.812058] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:40.859 [2024-11-18 14:19:32.812136] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:40.859 [2024-11-18 14:19:32.812284] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:40.860 [2024-11-18 14:19:32.812703] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.860 [2024-11-18 14:19:32.812824] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:17:40.860 request: 00:17:40.860 { 00:17:40.860 "name": "raid_bdev1", 00:17:40.860 "raid_level": "raid1", 00:17:40.860 "base_bdevs": [ 00:17:40.860 "malloc1", 00:17:40.860 "malloc2", 00:17:40.860 "malloc3", 00:17:40.860 "malloc4" 00:17:40.860 ], 00:17:40.860 "superblock": false, 00:17:40.860 "method": "bdev_raid_create", 00:17:40.860 "req_id": 1 00:17:40.860 } 00:17:40.860 Got JSON-RPC error response 00:17:40.860 response: 00:17:40.860 { 00:17:40.860 "code": -17, 00:17:40.860 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.860 } 00:17:40.860 14:19:32 -- common/autotest_common.sh@653 -- # es=1 00:17:40.860 14:19:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.860 14:19:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.860 14:19:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.860 14:19:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.860 14:19:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:41.118 14:19:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:41.118 14:19:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:41.118 14:19:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:41.376 [2024-11-18 14:19:33.269198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:41.376 [2024-11-18 14:19:33.269421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.376 [2024-11-18 14:19:33.269586] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:41.376 [2024-11-18 14:19:33.269721] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.376 [2024-11-18 14:19:33.272164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.376 [2024-11-18 14:19:33.272371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:41.376 [2024-11-18 14:19:33.272582] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:41.376 [2024-11-18 14:19:33.272770] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.376 pt1 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.376 14:19:33 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.376 14:19:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.636 14:19:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.636 "name": "raid_bdev1", 00:17:41.636 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:41.636 "strip_size_kb": 0, 00:17:41.636 "state": "configuring", 00:17:41.636 "raid_level": "raid1", 00:17:41.636 "superblock": true, 00:17:41.636 "num_base_bdevs": 4, 00:17:41.636 "num_base_bdevs_discovered": 1, 00:17:41.636 "num_base_bdevs_operational": 4, 00:17:41.636 "base_bdevs_list": [ 00:17:41.636 { 00:17:41.636 "name": "pt1", 00:17:41.636 "uuid": "aaf792b3-6ac5-5b84-91cc-083a0edb5415", 00:17:41.636 "is_configured": true, 00:17:41.636 "data_offset": 2048, 00:17:41.636 "data_size": 63488 00:17:41.636 }, 00:17:41.636 { 00:17:41.636 "name": null, 00:17:41.636 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:41.636 "is_configured": false, 00:17:41.636 "data_offset": 2048, 00:17:41.636 "data_size": 63488 00:17:41.636 }, 00:17:41.636 { 00:17:41.636 "name": null, 00:17:41.636 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:41.636 "is_configured": false, 00:17:41.636 "data_offset": 2048, 00:17:41.636 "data_size": 63488 00:17:41.636 }, 00:17:41.636 { 00:17:41.636 "name": null, 00:17:41.636 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:41.636 "is_configured": false, 00:17:41.636 "data_offset": 2048, 00:17:41.636 "data_size": 63488 00:17:41.636 } 00:17:41.636 ] 00:17:41.636 }' 00:17:41.636 14:19:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.636 14:19:33 -- common/autotest_common.sh@10 -- # set +x 00:17:42.203 14:19:34 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:42.203 14:19:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:42.462 [2024-11-18 14:19:34.337378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:42.462 [2024-11-18 14:19:34.337588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.462 [2024-11-18 14:19:34.337677] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:42.462 [2024-11-18 14:19:34.337993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.462 [2024-11-18 14:19:34.338424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.462 [2024-11-18 14:19:34.338818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:42.462 [2024-11-18 14:19:34.339049] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:42.462 [2024-11-18 14:19:34.339220] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:42.462 pt2 00:17:42.462 14:19:34 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:42.462 [2024-11-18 14:19:34.521411] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.720 "name": "raid_bdev1", 00:17:42.720 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:42.720 "strip_size_kb": 0, 00:17:42.720 "state": "configuring", 00:17:42.720 "raid_level": "raid1", 00:17:42.720 "superblock": true, 00:17:42.720 "num_base_bdevs": 4, 00:17:42.720 "num_base_bdevs_discovered": 1, 00:17:42.720 "num_base_bdevs_operational": 4, 00:17:42.720 "base_bdevs_list": [ 00:17:42.720 { 00:17:42.720 "name": "pt1", 00:17:42.720 "uuid": "aaf792b3-6ac5-5b84-91cc-083a0edb5415", 00:17:42.720 "is_configured": true, 00:17:42.720 "data_offset": 2048, 00:17:42.720 "data_size": 63488 00:17:42.720 }, 00:17:42.720 { 00:17:42.720 "name": null, 00:17:42.720 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:42.720 "is_configured": false, 00:17:42.720 "data_offset": 2048, 00:17:42.720 "data_size": 63488 00:17:42.720 }, 00:17:42.720 { 00:17:42.720 "name": null, 00:17:42.720 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:42.720 "is_configured": false, 00:17:42.720 "data_offset": 2048, 00:17:42.720 "data_size": 63488 00:17:42.720 }, 00:17:42.720 { 00:17:42.720 "name": null, 00:17:42.720 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:42.720 "is_configured": false, 00:17:42.720 "data_offset": 2048, 00:17:42.720 "data_size": 63488 00:17:42.720 } 00:17:42.720 ] 00:17:42.720 }' 00:17:42.720 14:19:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.720 14:19:34 -- common/autotest_common.sh@10 -- # set +x 00:17:43.288 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:43.288 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:43.288 14:19:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:43.547 [2024-11-18 14:19:35.537565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:43.547 [2024-11-18 14:19:35.537774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.547 [2024-11-18 14:19:35.537855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:43.547 [2024-11-18 14:19:35.538102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.547 [2024-11-18 14:19:35.538521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.547 [2024-11-18 14:19:35.538612] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:43.547 [2024-11-18 14:19:35.538710] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:43.547 [2024-11-18 
14:19:35.538764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:43.547 pt2 00:17:43.547 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:43.547 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:43.547 14:19:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:43.849 [2024-11-18 14:19:35.733632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:43.849 [2024-11-18 14:19:35.733854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.849 [2024-11-18 14:19:35.733932] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:43.849 [2024-11-18 14:19:35.734176] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.849 [2024-11-18 14:19:35.734591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.849 [2024-11-18 14:19:35.734779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:43.849 [2024-11-18 14:19:35.734963] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:43.849 [2024-11-18 14:19:35.735103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:43.849 pt3 00:17:43.849 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:43.849 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:43.849 14:19:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:44.133 [2024-11-18 14:19:35.969655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:44.133 [2024-11-18 14:19:35.969862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.133 [2024-11-18 14:19:35.969938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:44.133 [2024-11-18 14:19:35.970071] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.133 [2024-11-18 14:19:35.970502] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.133 [2024-11-18 14:19:35.970680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:44.133 [2024-11-18 14:19:35.970880] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:44.133 [2024-11-18 14:19:35.971005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:44.133 [2024-11-18 14:19:35.971277] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:44.133 [2024-11-18 14:19:35.971381] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:44.133 [2024-11-18 14:19:35.971513] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:17:44.133 [2024-11-18 14:19:35.971902] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:44.133 [2024-11-18 14:19:35.971949] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:44.133 [2024-11-18 14:19:35.972073] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.133 pt4 
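Note what did not happen in the passage above: no second bdev_raid_create. Recreating each passthru bdev fires the raid module's examine path, the superblock written by -s is found on every leg ("raid superblock found on bdev ptN"), and raid_bdev1 reassembles itself once the last base bdev appears. A sketch of just this reassembly step, assuming pt1 already exists and the malloc bdevs still carry their superblocks:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 2 3 4; do
      # each recreate is examined, matched against the superblock, claimed;
      # when pt4 shows up the array transitions to online on its own
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
           -u "00000000-0000-0000-0000-00000000000$i"
  done
  $rpc bdev_wait_for_examine     # let the examine callbacks settle
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'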
00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.133 14:19:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.393 14:19:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.393 "name": "raid_bdev1", 00:17:44.393 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:44.393 "strip_size_kb": 0, 00:17:44.393 "state": "online", 00:17:44.393 "raid_level": "raid1", 00:17:44.393 "superblock": true, 00:17:44.393 "num_base_bdevs": 4, 00:17:44.393 "num_base_bdevs_discovered": 4, 00:17:44.393 "num_base_bdevs_operational": 4, 00:17:44.393 "base_bdevs_list": [ 00:17:44.393 { 00:17:44.393 "name": "pt1", 00:17:44.393 "uuid": "aaf792b3-6ac5-5b84-91cc-083a0edb5415", 00:17:44.393 "is_configured": true, 00:17:44.393 "data_offset": 2048, 00:17:44.393 "data_size": 63488 00:17:44.393 }, 00:17:44.393 { 00:17:44.393 "name": "pt2", 00:17:44.393 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:44.393 "is_configured": true, 00:17:44.393 "data_offset": 2048, 00:17:44.393 "data_size": 63488 00:17:44.393 }, 00:17:44.393 { 00:17:44.393 "name": "pt3", 00:17:44.393 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:44.393 "is_configured": true, 00:17:44.393 "data_offset": 2048, 00:17:44.393 "data_size": 63488 00:17:44.393 }, 00:17:44.393 { 00:17:44.393 "name": "pt4", 00:17:44.393 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:44.393 "is_configured": true, 00:17:44.393 "data_offset": 2048, 00:17:44.393 "data_size": 63488 00:17:44.393 } 00:17:44.393 ] 00:17:44.393 }' 00:17:44.393 14:19:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.393 14:19:36 -- common/autotest_common.sh@10 -- # set +x 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:44.960 [2024-11-18 14:19:36.987248] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@430 -- # '[' 40f23ba5-6ade-4606-8f08-9dcfac73858c '!=' 40f23ba5-6ade-4606-8f08-9dcfac73858c ']' 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:44.960 14:19:36 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:45.218 [2024-11-18 14:19:37.175113] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.218 14:19:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.477 14:19:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.477 "name": "raid_bdev1", 00:17:45.477 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:45.477 "strip_size_kb": 0, 00:17:45.477 "state": "online", 00:17:45.477 "raid_level": "raid1", 00:17:45.477 "superblock": true, 00:17:45.477 "num_base_bdevs": 4, 00:17:45.477 "num_base_bdevs_discovered": 3, 00:17:45.477 "num_base_bdevs_operational": 3, 00:17:45.477 "base_bdevs_list": [ 00:17:45.477 { 00:17:45.477 "name": null, 00:17:45.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.477 "is_configured": false, 00:17:45.477 "data_offset": 2048, 00:17:45.477 "data_size": 63488 00:17:45.477 }, 00:17:45.477 { 00:17:45.477 "name": "pt2", 00:17:45.477 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:45.477 "is_configured": true, 00:17:45.477 "data_offset": 2048, 00:17:45.477 "data_size": 63488 00:17:45.477 }, 00:17:45.477 { 00:17:45.477 "name": "pt3", 00:17:45.477 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:45.477 "is_configured": true, 00:17:45.477 "data_offset": 2048, 00:17:45.477 "data_size": 63488 00:17:45.477 }, 00:17:45.477 { 00:17:45.477 "name": "pt4", 00:17:45.477 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:45.477 "is_configured": true, 00:17:45.477 "data_offset": 2048, 00:17:45.477 "data_size": 63488 00:17:45.477 } 00:17:45.477 ] 00:17:45.477 }' 00:17:45.477 14:19:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.477 14:19:37 -- common/autotest_common.sh@10 -- # set +x 00:17:46.046 14:19:37 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:46.305 [2024-11-18 14:19:38.227293] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.305 [2024-11-18 14:19:38.227456] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.305 [2024-11-18 14:19:38.227616] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.305 [2024-11-18 14:19:38.227798] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.305 [2024-11-18 14:19:38.227917] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:17:46.305 14:19:38 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:46.305 14:19:38 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:46.563 14:19:38 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:46.822 14:19:38 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:46.822 14:19:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:46.822 14:19:38 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:47.081 14:19:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:47.081 14:19:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:47.081 14:19:39 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:47.081 14:19:39 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:47.081 14:19:39 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.339 [2024-11-18 14:19:39.227422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.339 [2024-11-18 14:19:39.227630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.339 [2024-11-18 14:19:39.227703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:47.339 [2024-11-18 14:19:39.228003] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.339 [2024-11-18 14:19:39.229957] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.339 [2024-11-18 14:19:39.230153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.339 [2024-11-18 14:19:39.230335] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:47.339 [2024-11-18 14:19:39.230489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.339 pt2 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.339 14:19:39 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.597 14:19:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.597 "name": "raid_bdev1", 00:17:47.597 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:47.597 "strip_size_kb": 0, 00:17:47.597 "state": "configuring", 00:17:47.597 "raid_level": "raid1", 00:17:47.597 "superblock": true, 00:17:47.597 "num_base_bdevs": 4, 00:17:47.597 "num_base_bdevs_discovered": 1, 00:17:47.597 "num_base_bdevs_operational": 3, 00:17:47.597 "base_bdevs_list": [ 00:17:47.597 { 00:17:47.597 "name": null, 00:17:47.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.597 "is_configured": false, 00:17:47.597 "data_offset": 2048, 00:17:47.597 "data_size": 63488 00:17:47.597 }, 00:17:47.597 { 00:17:47.597 "name": "pt2", 00:17:47.597 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:47.597 "is_configured": true, 00:17:47.597 "data_offset": 2048, 00:17:47.597 "data_size": 63488 00:17:47.597 }, 00:17:47.597 { 00:17:47.597 "name": null, 00:17:47.597 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:47.597 "is_configured": false, 00:17:47.597 "data_offset": 2048, 00:17:47.597 "data_size": 63488 00:17:47.597 }, 00:17:47.597 { 00:17:47.597 "name": null, 00:17:47.597 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:47.597 "is_configured": false, 00:17:47.597 "data_offset": 2048, 00:17:47.597 "data_size": 63488 00:17:47.597 } 00:17:47.597 ] 00:17:47.597 }' 00:17:47.597 14:19:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.597 14:19:39 -- common/autotest_common.sh@10 -- # set +x 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:48.164 [2024-11-18 14:19:40.195586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:48.164 [2024-11-18 14:19:40.195790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.164 [2024-11-18 14:19:40.195872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:48.164 [2024-11-18 14:19:40.196115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.164 [2024-11-18 14:19:40.196497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.164 [2024-11-18 14:19:40.196577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:48.164 [2024-11-18 14:19:40.196677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:48.164 [2024-11-18 14:19:40.196733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:48.164 pt3 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.164 14:19:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.423 14:19:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.423 "name": "raid_bdev1", 00:17:48.423 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:48.423 "strip_size_kb": 0, 00:17:48.423 "state": "configuring", 00:17:48.423 "raid_level": "raid1", 00:17:48.423 "superblock": true, 00:17:48.423 "num_base_bdevs": 4, 00:17:48.423 "num_base_bdevs_discovered": 2, 00:17:48.423 "num_base_bdevs_operational": 3, 00:17:48.423 "base_bdevs_list": [ 00:17:48.423 { 00:17:48.423 "name": null, 00:17:48.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.423 "is_configured": false, 00:17:48.423 "data_offset": 2048, 00:17:48.423 "data_size": 63488 00:17:48.423 }, 00:17:48.423 { 00:17:48.423 "name": "pt2", 00:17:48.423 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:48.423 "is_configured": true, 00:17:48.423 "data_offset": 2048, 00:17:48.423 "data_size": 63488 00:17:48.423 }, 00:17:48.423 { 00:17:48.423 "name": "pt3", 00:17:48.423 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:48.423 "is_configured": true, 00:17:48.423 "data_offset": 2048, 00:17:48.423 "data_size": 63488 00:17:48.423 }, 00:17:48.423 { 00:17:48.423 "name": null, 00:17:48.423 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:48.423 "is_configured": false, 00:17:48.423 "data_offset": 2048, 00:17:48.423 "data_size": 63488 00:17:48.423 } 00:17:48.423 ] 00:17:48.423 }' 00:17:48.423 14:19:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.423 14:19:40 -- common/autotest_common.sh@10 -- # set +x 00:17:48.991 14:19:40 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:48.991 14:19:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:48.991 14:19:40 -- bdev/bdev_raid.sh@462 -- # i=3 00:17:48.991 14:19:40 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:49.249 [2024-11-18 14:19:41.127977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:49.249 [2024-11-18 14:19:41.128187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.249 [2024-11-18 14:19:41.128268] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:49.249 [2024-11-18 14:19:41.128557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.249 [2024-11-18 14:19:41.128960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.249 [2024-11-18 14:19:41.129119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:49.249 [2024-11-18 14:19:41.129320] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:49.249 [2024-11-18 14:19:41.129444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:49.249 [2024-11-18 14:19:41.129665] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:17:49.250 [2024-11-18 14:19:41.129786] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
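Every state check in this trace reduces to the same RPC-plus-jq pipeline: dump all raid bdevs, select raid_bdev1, and compare fields. A minimal bash sketch of that pattern follows; the socket path, RPC name, and jq filter are copied verbatim from the commands above, while the function name and argument are illustrative assumptions, not the test suite's own helper.

verify_raid1_state() {
  # Sketch of the repeated check above; expects "configuring" or "online".
  local expected=$1
  local info
  info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  # Both fields appear in the JSON dumps recorded in this log.
  [[ $(jq -r '.state' <<< "$info") == "$expected" ]] &&
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
}

Called as verify_raid1_state configuring after each bdev_passthru_create, and verify_raid1_state online once the final member joins, it mirrors the transitions logged in this section.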
00:17:49.250 [2024-11-18 14:19:41.129903] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:17:49.250 [2024-11-18 14:19:41.130326] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:17:49.250 [2024-11-18 14:19:41.130477] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:17:49.250 [2024-11-18 14:19:41.130674] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.250 pt4 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.250 14:19:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.509 14:19:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.509 "name": "raid_bdev1", 00:17:49.509 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:49.509 "strip_size_kb": 0, 00:17:49.509 "state": "online", 00:17:49.509 "raid_level": "raid1", 00:17:49.509 "superblock": true, 00:17:49.509 "num_base_bdevs": 4, 00:17:49.509 "num_base_bdevs_discovered": 3, 00:17:49.509 "num_base_bdevs_operational": 3, 00:17:49.509 "base_bdevs_list": [ 00:17:49.509 { 00:17:49.509 "name": null, 00:17:49.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.509 "is_configured": false, 00:17:49.509 "data_offset": 2048, 00:17:49.509 "data_size": 63488 00:17:49.509 }, 00:17:49.509 { 00:17:49.509 "name": "pt2", 00:17:49.509 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:49.509 "is_configured": true, 00:17:49.509 "data_offset": 2048, 00:17:49.509 "data_size": 63488 00:17:49.509 }, 00:17:49.509 { 00:17:49.509 "name": "pt3", 00:17:49.509 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:49.509 "is_configured": true, 00:17:49.509 "data_offset": 2048, 00:17:49.509 "data_size": 63488 00:17:49.509 }, 00:17:49.509 { 00:17:49.509 "name": "pt4", 00:17:49.509 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:49.509 "is_configured": true, 00:17:49.509 "data_offset": 2048, 00:17:49.509 "data_size": 63488 00:17:49.509 } 00:17:49.509 ] 00:17:49.509 }' 00:17:49.509 14:19:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.509 14:19:41 -- common/autotest_common.sh@10 -- # set +x 00:17:50.077 14:19:41 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:17:50.077 14:19:41 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.335 [2024-11-18 14:19:42.220169] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.335 [2024-11-18 14:19:42.220330] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:17:50.335 [2024-11-18 14:19:42.220478] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.335 [2024-11-18 14:19:42.220654] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.335 [2024-11-18 14:19:42.220774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:17:50.335 14:19:42 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.335 14:19:42 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.593 [2024-11-18 14:19:42.596222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.593 [2024-11-18 14:19:42.596418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.593 [2024-11-18 14:19:42.596508] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:50.593 [2024-11-18 14:19:42.596807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.593 [2024-11-18 14:19:42.598791] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.593 [2024-11-18 14:19:42.598968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.593 [2024-11-18 14:19:42.599132] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:50.593 [2024-11-18 14:19:42.599300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.593 pt1 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.593 14:19:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.851 14:19:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:50.851 "name": "raid_bdev1", 00:17:50.851 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:50.851 "strip_size_kb": 0, 00:17:50.851 "state": "configuring", 00:17:50.851 "raid_level": "raid1", 00:17:50.851 "superblock": true, 00:17:50.851 "num_base_bdevs": 4, 00:17:50.851 "num_base_bdevs_discovered": 1, 00:17:50.851 "num_base_bdevs_operational": 4, 00:17:50.851 "base_bdevs_list": [ 00:17:50.851 { 00:17:50.851 "name": "pt1", 00:17:50.851 "uuid": 
"aaf792b3-6ac5-5b84-91cc-083a0edb5415", 00:17:50.851 "is_configured": true, 00:17:50.851 "data_offset": 2048, 00:17:50.851 "data_size": 63488 00:17:50.851 }, 00:17:50.851 { 00:17:50.851 "name": null, 00:17:50.851 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:50.851 "is_configured": false, 00:17:50.851 "data_offset": 2048, 00:17:50.851 "data_size": 63488 00:17:50.851 }, 00:17:50.851 { 00:17:50.851 "name": null, 00:17:50.851 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:50.851 "is_configured": false, 00:17:50.851 "data_offset": 2048, 00:17:50.851 "data_size": 63488 00:17:50.851 }, 00:17:50.851 { 00:17:50.851 "name": null, 00:17:50.851 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:50.851 "is_configured": false, 00:17:50.851 "data_offset": 2048, 00:17:50.851 "data_size": 63488 00:17:50.851 } 00:17:50.851 ] 00:17:50.851 }' 00:17:50.851 14:19:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:50.851 14:19:42 -- common/autotest_common.sh@10 -- # set +x 00:17:51.418 14:19:43 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:51.418 14:19:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:51.418 14:19:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:51.676 14:19:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:51.676 14:19:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:51.676 14:19:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:51.935 14:19:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:51.935 14:19:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:51.935 14:19:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@489 -- # i=3 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:52.194 [2024-11-18 14:19:44.244528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:52.194 [2024-11-18 14:19:44.244735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.194 [2024-11-18 14:19:44.244812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:52.194 [2024-11-18 14:19:44.245117] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.194 [2024-11-18 14:19:44.245505] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.194 [2024-11-18 14:19:44.245902] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:52.194 [2024-11-18 14:19:44.246105] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:52.194 [2024-11-18 14:19:44.246247] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:52.194 [2024-11-18 14:19:44.246352] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.194 [2024-11-18 14:19:44.246452] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 
00:17:52.194 [2024-11-18 14:19:44.246721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:52.194 pt4 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.194 14:19:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.453 14:19:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.453 "name": "raid_bdev1", 00:17:52.453 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:52.453 "strip_size_kb": 0, 00:17:52.453 "state": "configuring", 00:17:52.453 "raid_level": "raid1", 00:17:52.453 "superblock": true, 00:17:52.453 "num_base_bdevs": 4, 00:17:52.453 "num_base_bdevs_discovered": 1, 00:17:52.453 "num_base_bdevs_operational": 3, 00:17:52.453 "base_bdevs_list": [ 00:17:52.453 { 00:17:52.453 "name": null, 00:17:52.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.453 "is_configured": false, 00:17:52.453 "data_offset": 2048, 00:17:52.453 "data_size": 63488 00:17:52.453 }, 00:17:52.453 { 00:17:52.453 "name": null, 00:17:52.453 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:52.453 "is_configured": false, 00:17:52.453 "data_offset": 2048, 00:17:52.453 "data_size": 63488 00:17:52.453 }, 00:17:52.453 { 00:17:52.453 "name": null, 00:17:52.453 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:52.453 "is_configured": false, 00:17:52.453 "data_offset": 2048, 00:17:52.453 "data_size": 63488 00:17:52.453 }, 00:17:52.453 { 00:17:52.453 "name": "pt4", 00:17:52.453 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:52.453 "is_configured": true, 00:17:52.453 "data_offset": 2048, 00:17:52.453 "data_size": 63488 00:17:52.453 } 00:17:52.453 ] 00:17:52.453 }' 00:17:52.453 14:19:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.453 14:19:44 -- common/autotest_common.sh@10 -- # set +x 00:17:53.390 14:19:45 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:53.390 14:19:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:53.390 14:19:45 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.390 [2024-11-18 14:19:45.300721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.390 [2024-11-18 14:19:45.300925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.390 [2024-11-18 14:19:45.300999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:53.390 [2024-11-18 14:19:45.301309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.390 [2024-11-18 
14:19:45.301719] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.390 [2024-11-18 14:19:45.302129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.390 [2024-11-18 14:19:45.302346] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:53.390 [2024-11-18 14:19:45.302506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.390 pt2 00:17:53.390 14:19:45 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:53.390 14:19:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:53.390 14:19:45 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.649 [2024-11-18 14:19:45.488749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.650 [2024-11-18 14:19:45.488960] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.650 [2024-11-18 14:19:45.489035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:53.650 [2024-11-18 14:19:45.489258] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.650 [2024-11-18 14:19:45.489618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.650 [2024-11-18 14:19:45.489805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.650 [2024-11-18 14:19:45.490023] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:53.650 [2024-11-18 14:19:45.490137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:53.650 [2024-11-18 14:19:45.490303] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:17:53.650 [2024-11-18 14:19:45.490429] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:53.650 [2024-11-18 14:19:45.490543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:17:53.650 [2024-11-18 14:19:45.491063] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:17:53.650 [2024-11-18 14:19:45.491223] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:17:53.650 [2024-11-18 14:19:45.491412] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.650 pt3 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.650 14:19:45 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.650 "name": "raid_bdev1", 00:17:53.650 "uuid": "40f23ba5-6ade-4606-8f08-9dcfac73858c", 00:17:53.650 "strip_size_kb": 0, 00:17:53.650 "state": "online", 00:17:53.650 "raid_level": "raid1", 00:17:53.650 "superblock": true, 00:17:53.650 "num_base_bdevs": 4, 00:17:53.650 "num_base_bdevs_discovered": 3, 00:17:53.650 "num_base_bdevs_operational": 3, 00:17:53.650 "base_bdevs_list": [ 00:17:53.650 { 00:17:53.650 "name": null, 00:17:53.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.650 "is_configured": false, 00:17:53.650 "data_offset": 2048, 00:17:53.650 "data_size": 63488 00:17:53.650 }, 00:17:53.650 { 00:17:53.650 "name": "pt2", 00:17:53.650 "uuid": "fa73b38f-fa3c-5eca-8616-b4bbb6ea5eb0", 00:17:53.650 "is_configured": true, 00:17:53.650 "data_offset": 2048, 00:17:53.650 "data_size": 63488 00:17:53.650 }, 00:17:53.650 { 00:17:53.650 "name": "pt3", 00:17:53.650 "uuid": "1c65d08c-c53e-5992-986e-203366a16551", 00:17:53.650 "is_configured": true, 00:17:53.650 "data_offset": 2048, 00:17:53.650 "data_size": 63488 00:17:53.650 }, 00:17:53.650 { 00:17:53.650 "name": "pt4", 00:17:53.650 "uuid": "96e6e2b1-7510-582a-a046-c57f07bf229f", 00:17:53.650 "is_configured": true, 00:17:53.650 "data_offset": 2048, 00:17:53.650 "data_size": 63488 00:17:53.650 } 00:17:53.650 ] 00:17:53.650 }' 00:17:53.650 14:19:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.650 14:19:45 -- common/autotest_common.sh@10 -- # set +x 00:17:54.587 14:19:46 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:54.587 14:19:46 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:54.587 [2024-11-18 14:19:46.581103] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.587 14:19:46 -- bdev/bdev_raid.sh@506 -- # '[' 40f23ba5-6ade-4606-8f08-9dcfac73858c '!=' 40f23ba5-6ade-4606-8f08-9dcfac73858c ']' 00:17:54.587 14:19:46 -- bdev/bdev_raid.sh@511 -- # killprocess 131750 00:17:54.587 14:19:46 -- common/autotest_common.sh@936 -- # '[' -z 131750 ']' 00:17:54.587 14:19:46 -- common/autotest_common.sh@940 -- # kill -0 131750 00:17:54.587 14:19:46 -- common/autotest_common.sh@941 -- # uname 00:17:54.587 14:19:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.587 14:19:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131750 00:17:54.587 14:19:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:54.587 14:19:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:54.587 14:19:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131750' 00:17:54.587 killing process with pid 131750 00:17:54.587 14:19:46 -- common/autotest_common.sh@955 -- # kill 131750 00:17:54.587 [2024-11-18 14:19:46.628378] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.587 14:19:46 -- common/autotest_common.sh@960 -- # wait 131750 00:17:54.587 [2024-11-18 14:19:46.628606] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.587 [2024-11-18 14:19:46.628838] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.587 [2024-11-18 
14:19:46.628993] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:17:54.846 [2024-11-18 14:19:46.679258] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.105 ************************************ 00:17:55.105 END TEST raid_superblock_test 00:17:55.105 ************************************ 00:17:55.105 14:19:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:55.105 00:17:55.105 real 0m20.057s 00:17:55.105 user 0m37.797s 00:17:55.105 sys 0m2.323s 00:17:55.105 14:19:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:55.105 14:19:46 -- common/autotest_common.sh@10 -- # set +x 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:17:55.105 14:19:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:55.105 14:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:55.105 14:19:47 -- common/autotest_common.sh@10 -- # set +x 00:17:55.105 ************************************ 00:17:55.105 START TEST raid_rebuild_test 00:17:55.105 ************************************ 00:17:55.105 14:19:47 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=132402 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132402 /var/tmp/spdk-raid.sock 00:17:55.105 14:19:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:55.105 14:19:47 -- common/autotest_common.sh@829 -- # '[' -z 132402 ']' 00:17:55.105 14:19:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:55.105 14:19:47 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.105 14:19:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:55.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:55.105 14:19:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.105 14:19:47 -- common/autotest_common.sh@10 -- # set +x 00:17:55.105 [2024-11-18 14:19:47.102828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:55.105 [2024-11-18 14:19:47.103257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132402 ] 00:17:55.105 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:55.105 Zero copy mechanism will not be used. 00:17:55.364 [2024-11-18 14:19:47.255426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.364 [2024-11-18 14:19:47.338650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.364 [2024-11-18 14:19:47.419021] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.302 14:19:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.302 14:19:48 -- common/autotest_common.sh@862 -- # return 0 00:17:56.302 14:19:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:17:56.302 14:19:48 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:17:56.302 14:19:48 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.302 BaseBdev1 00:17:56.302 14:19:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:17:56.302 14:19:48 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:17:56.302 14:19:48 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.561 BaseBdev2 00:17:56.561 14:19:48 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:17:56.820 spare_malloc 00:17:56.821 14:19:48 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:57.080 spare_delay 00:17:57.080 14:19:49 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:57.338 [2024-11-18 14:19:49.181790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:57.338 [2024-11-18 14:19:49.182101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.338 [2024-11-18 14:19:49.182195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:57.338 [2024-11-18 14:19:49.182549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.338 [2024-11-18 14:19:49.185051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.338 [2024-11-18 14:19:49.185230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:57.338 spare 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:57.338 [2024-11-18 14:19:49.365876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.338 [2024-11-18 14:19:49.368033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.338 [2024-11-18 14:19:49.368241] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:57.338 [2024-11-18 14:19:49.368360] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:57.338 [2024-11-18 14:19:49.368542] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:57.338 [2024-11-18 14:19:49.369022] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:57.338 [2024-11-18 14:19:49.369156] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:17:57.338 [2024-11-18 14:19:49.369403] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.338 14:19:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.596 14:19:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.596 "name": "raid_bdev1", 00:17:57.596 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:17:57.596 "strip_size_kb": 0, 00:17:57.596 "state": "online", 00:17:57.596 "raid_level": "raid1", 00:17:57.596 "superblock": false, 00:17:57.596 "num_base_bdevs": 2, 00:17:57.596 "num_base_bdevs_discovered": 2, 00:17:57.597 "num_base_bdevs_operational": 2, 00:17:57.597 "base_bdevs_list": [ 00:17:57.597 { 00:17:57.597 "name": "BaseBdev1", 00:17:57.597 "uuid": "222df5bc-88ec-44e0-8eb6-3ff0a6958504", 00:17:57.597 "is_configured": true, 00:17:57.597 "data_offset": 0, 00:17:57.597 "data_size": 65536 00:17:57.597 }, 00:17:57.597 { 00:17:57.597 "name": "BaseBdev2", 00:17:57.597 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:17:57.597 "is_configured": true, 00:17:57.597 "data_offset": 0, 00:17:57.597 "data_size": 65536 00:17:57.597 } 00:17:57.597 ] 00:17:57.597 }' 00:17:57.597 14:19:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.597 14:19:49 -- common/autotest_common.sh@10 -- # set +x 00:17:58.163 14:19:50 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:58.163 14:19:50 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:17:58.422 [2024-11-18 14:19:50.390199] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.422 14:19:50 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:17:58.422 14:19:50 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.422 14:19:50 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:58.680 14:19:50 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:17:58.680 14:19:50 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:17:58.680 14:19:50 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:17:58.680 14:19:50 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@12 -- # local i 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:58.680 14:19:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:58.939 [2024-11-18 14:19:50.822139] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:58.939 /dev/nbd0 00:17:58.939 14:19:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:58.939 14:19:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:58.939 14:19:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:17:58.939 14:19:50 -- common/autotest_common.sh@867 -- # local i 00:17:58.939 14:19:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:17:58.939 14:19:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:17:58.939 14:19:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:17:58.939 14:19:50 -- common/autotest_common.sh@871 -- # break 00:17:58.939 14:19:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:17:58.939 14:19:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:17:58.939 14:19:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:58.939 1+0 records in 00:17:58.939 1+0 records out 00:17:58.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350332 s, 11.7 MB/s 00:17:58.939 14:19:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.939 14:19:50 -- common/autotest_common.sh@884 -- # size=4096 00:17:58.939 14:19:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:58.939 14:19:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:17:58.939 14:19:50 -- common/autotest_common.sh@887 -- # return 0 00:17:58.939 14:19:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:58.939 14:19:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:58.939 14:19:50 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:17:58.939 14:19:50 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:17:58.939 14:19:50 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:18:03.131 65536+0 records in 00:18:03.132 65536+0 records out 00:18:03.132 33554432 bytes (34 MB, 32 MiB) 
copied, 4.16822 s, 8.1 MB/s 00:18:03.132 14:19:55 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:03.132 14:19:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:03.132 14:19:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:03.132 14:19:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:03.132 14:19:55 -- bdev/nbd_common.sh@51 -- # local i 00:18:03.132 14:19:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:03.132 14:19:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:03.391 [2024-11-18 14:19:55.303635] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@41 -- # break 00:18:03.391 14:19:55 -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.391 14:19:55 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:03.651 [2024-11-18 14:19:55.475364] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.651 14:19:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.910 14:19:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.910 "name": "raid_bdev1", 00:18:03.910 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:03.910 "strip_size_kb": 0, 00:18:03.910 "state": "online", 00:18:03.910 "raid_level": "raid1", 00:18:03.910 "superblock": false, 00:18:03.910 "num_base_bdevs": 2, 00:18:03.910 "num_base_bdevs_discovered": 1, 00:18:03.910 "num_base_bdevs_operational": 1, 00:18:03.910 "base_bdevs_list": [ 00:18:03.910 { 00:18:03.910 "name": null, 00:18:03.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.910 "is_configured": false, 00:18:03.910 "data_offset": 0, 00:18:03.910 "data_size": 65536 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "name": "BaseBdev2", 00:18:03.910 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:03.910 "is_configured": true, 00:18:03.910 "data_offset": 0, 00:18:03.910 "data_size": 65536 00:18:03.910 } 00:18:03.910 ] 00:18:03.910 }' 
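The rebuild test primes the array with known data before degrading it: the raid bdev is exported over NBD, filled from /dev/urandom, detached, and then one base bdev is removed, leaving the array online with a single member. A sketch of that sequence, built only from commands recorded verbatim above (65536 blocks x 512 B = the 32 MiB transfer shown in the dd output):

# Expose raid_bdev1 as a block device and write 32 MiB of random data.
scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
# Degrade the array; raid1 stays online with one discovered base bdev.
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1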
00:18:03.910 14:19:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.910 14:19:55 -- common/autotest_common.sh@10 -- # set +x 00:18:04.478 14:19:56 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.737 [2024-11-18 14:19:56.587537] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:04.737 [2024-11-18 14:19:56.587707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.737 [2024-11-18 14:19:56.594645] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d05ee0 00:18:04.737 [2024-11-18 14:19:56.596871] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.737 14:19:56 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.671 14:19:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.928 14:19:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:05.928 "name": "raid_bdev1", 00:18:05.928 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:05.928 "strip_size_kb": 0, 00:18:05.928 "state": "online", 00:18:05.928 "raid_level": "raid1", 00:18:05.928 "superblock": false, 00:18:05.928 "num_base_bdevs": 2, 00:18:05.928 "num_base_bdevs_discovered": 2, 00:18:05.928 "num_base_bdevs_operational": 2, 00:18:05.928 "process": { 00:18:05.928 "type": "rebuild", 00:18:05.928 "target": "spare", 00:18:05.928 "progress": { 00:18:05.928 "blocks": 24576, 00:18:05.928 "percent": 37 00:18:05.928 } 00:18:05.928 }, 00:18:05.928 "base_bdevs_list": [ 00:18:05.928 { 00:18:05.928 "name": "spare", 00:18:05.928 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:05.928 "is_configured": true, 00:18:05.928 "data_offset": 0, 00:18:05.928 "data_size": 65536 00:18:05.928 }, 00:18:05.928 { 00:18:05.928 "name": "BaseBdev2", 00:18:05.928 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:05.928 "is_configured": true, 00:18:05.928 "data_offset": 0, 00:18:05.928 "data_size": 65536 00:18:05.928 } 00:18:05.928 ] 00:18:05.928 }' 00:18:05.928 14:19:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:05.928 14:19:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.928 14:19:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:05.929 14:19:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.929 14:19:57 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:06.187 [2024-11-18 14:19:58.179088] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.187 [2024-11-18 14:19:58.207069] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:06.187 [2024-11-18 14:19:58.207169] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.187 14:19:58 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.187 14:19:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.446 14:19:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.446 "name": "raid_bdev1", 00:18:06.446 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:06.446 "strip_size_kb": 0, 00:18:06.446 "state": "online", 00:18:06.446 "raid_level": "raid1", 00:18:06.446 "superblock": false, 00:18:06.446 "num_base_bdevs": 2, 00:18:06.446 "num_base_bdevs_discovered": 1, 00:18:06.446 "num_base_bdevs_operational": 1, 00:18:06.446 "base_bdevs_list": [ 00:18:06.446 { 00:18:06.446 "name": null, 00:18:06.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.446 "is_configured": false, 00:18:06.446 "data_offset": 0, 00:18:06.446 "data_size": 65536 00:18:06.446 }, 00:18:06.446 { 00:18:06.446 "name": "BaseBdev2", 00:18:06.446 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:06.446 "is_configured": true, 00:18:06.446 "data_offset": 0, 00:18:06.446 "data_size": 65536 00:18:06.446 } 00:18:06.446 ] 00:18:06.446 }' 00:18:06.446 14:19:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.447 14:19:58 -- common/autotest_common.sh@10 -- # set +x 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.014 14:19:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.273 14:19:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:07.273 "name": "raid_bdev1", 00:18:07.273 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:07.273 "strip_size_kb": 0, 00:18:07.273 "state": "online", 00:18:07.273 "raid_level": "raid1", 00:18:07.273 "superblock": false, 00:18:07.273 "num_base_bdevs": 2, 00:18:07.273 "num_base_bdevs_discovered": 1, 00:18:07.273 "num_base_bdevs_operational": 1, 00:18:07.273 "base_bdevs_list": [ 00:18:07.273 { 00:18:07.273 "name": null, 00:18:07.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.273 "is_configured": false, 00:18:07.273 "data_offset": 0, 00:18:07.273 "data_size": 65536 00:18:07.273 }, 00:18:07.273 { 00:18:07.273 "name": "BaseBdev2", 00:18:07.273 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:07.273 "is_configured": true, 
00:18:07.273 "data_offset": 0, 00:18:07.273 "data_size": 65536 00:18:07.273 } 00:18:07.273 ] 00:18:07.273 }' 00:18:07.273 14:19:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:07.531 14:19:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:07.531 14:19:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:07.531 14:19:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:07.531 14:19:59 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.790 [2024-11-18 14:19:59.683452] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:07.790 [2024-11-18 14:19:59.683487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.790 [2024-11-18 14:19:59.685641] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:18:07.790 [2024-11-18 14:19:59.687390] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.790 14:19:59 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.725 14:20:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.984 14:20:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:08.984 "name": "raid_bdev1", 00:18:08.984 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:08.984 "strip_size_kb": 0, 00:18:08.984 "state": "online", 00:18:08.984 "raid_level": "raid1", 00:18:08.984 "superblock": false, 00:18:08.984 "num_base_bdevs": 2, 00:18:08.984 "num_base_bdevs_discovered": 2, 00:18:08.984 "num_base_bdevs_operational": 2, 00:18:08.984 "process": { 00:18:08.984 "type": "rebuild", 00:18:08.984 "target": "spare", 00:18:08.984 "progress": { 00:18:08.984 "blocks": 24576, 00:18:08.984 "percent": 37 00:18:08.984 } 00:18:08.984 }, 00:18:08.984 "base_bdevs_list": [ 00:18:08.984 { 00:18:08.984 "name": "spare", 00:18:08.984 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:08.984 "is_configured": true, 00:18:08.984 "data_offset": 0, 00:18:08.984 "data_size": 65536 00:18:08.984 }, 00:18:08.984 { 00:18:08.984 "name": "BaseBdev2", 00:18:08.984 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:08.984 "is_configured": true, 00:18:08.984 "data_offset": 0, 00:18:08.984 "data_size": 65536 00:18:08.984 } 00:18:08.984 ] 00:18:08.984 }' 00:18:08.984 14:20:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:08.984 14:20:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.984 14:20:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:08.984 14:20:01 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@657 -- # local timeout=357 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.984 14:20:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.243 14:20:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:09.243 "name": "raid_bdev1", 00:18:09.243 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:09.243 "strip_size_kb": 0, 00:18:09.243 "state": "online", 00:18:09.243 "raid_level": "raid1", 00:18:09.243 "superblock": false, 00:18:09.243 "num_base_bdevs": 2, 00:18:09.243 "num_base_bdevs_discovered": 2, 00:18:09.243 "num_base_bdevs_operational": 2, 00:18:09.243 "process": { 00:18:09.243 "type": "rebuild", 00:18:09.243 "target": "spare", 00:18:09.243 "progress": { 00:18:09.243 "blocks": 30720, 00:18:09.243 "percent": 46 00:18:09.243 } 00:18:09.243 }, 00:18:09.243 "base_bdevs_list": [ 00:18:09.243 { 00:18:09.243 "name": "spare", 00:18:09.243 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:09.243 "is_configured": true, 00:18:09.243 "data_offset": 0, 00:18:09.243 "data_size": 65536 00:18:09.243 }, 00:18:09.243 { 00:18:09.243 "name": "BaseBdev2", 00:18:09.243 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:09.243 "is_configured": true, 00:18:09.243 "data_offset": 0, 00:18:09.243 "data_size": 65536 00:18:09.243 } 00:18:09.243 ] 00:18:09.243 }' 00:18:09.243 14:20:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:09.501 14:20:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.501 14:20:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:09.501 14:20:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.501 14:20:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.437 14:20:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.696 14:20:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:10.696 "name": "raid_bdev1", 00:18:10.696 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:10.696 "strip_size_kb": 0, 00:18:10.696 "state": "online", 00:18:10.696 "raid_level": "raid1", 00:18:10.696 "superblock": false, 00:18:10.696 "num_base_bdevs": 2, 00:18:10.696 "num_base_bdevs_discovered": 2, 00:18:10.696 "num_base_bdevs_operational": 2, 00:18:10.696 "process": { 
00:18:10.696 "type": "rebuild", 00:18:10.696 "target": "spare", 00:18:10.696 "progress": { 00:18:10.696 "blocks": 59392, 00:18:10.696 "percent": 90 00:18:10.696 } 00:18:10.696 }, 00:18:10.696 "base_bdevs_list": [ 00:18:10.696 { 00:18:10.696 "name": "spare", 00:18:10.696 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:10.696 "is_configured": true, 00:18:10.696 "data_offset": 0, 00:18:10.696 "data_size": 65536 00:18:10.696 }, 00:18:10.696 { 00:18:10.696 "name": "BaseBdev2", 00:18:10.696 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:10.696 "is_configured": true, 00:18:10.696 "data_offset": 0, 00:18:10.696 "data_size": 65536 00:18:10.696 } 00:18:10.696 ] 00:18:10.696 }' 00:18:10.696 14:20:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:10.696 14:20:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.696 14:20:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:10.696 14:20:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.696 14:20:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:10.955 [2024-11-18 14:20:02.903667] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:10.955 [2024-11-18 14:20:02.903750] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:10.955 [2024-11-18 14:20:02.903827] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.891 14:20:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.149 14:20:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:12.149 "name": "raid_bdev1", 00:18:12.149 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:12.149 "strip_size_kb": 0, 00:18:12.149 "state": "online", 00:18:12.149 "raid_level": "raid1", 00:18:12.149 "superblock": false, 00:18:12.149 "num_base_bdevs": 2, 00:18:12.149 "num_base_bdevs_discovered": 2, 00:18:12.150 "num_base_bdevs_operational": 2, 00:18:12.150 "base_bdevs_list": [ 00:18:12.150 { 00:18:12.150 "name": "spare", 00:18:12.150 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:12.150 "is_configured": true, 00:18:12.150 "data_offset": 0, 00:18:12.150 "data_size": 65536 00:18:12.150 }, 00:18:12.150 { 00:18:12.150 "name": "BaseBdev2", 00:18:12.150 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:12.150 "is_configured": true, 00:18:12.150 "data_offset": 0, 00:18:12.150 "data_size": 65536 00:18:12.150 } 00:18:12.150 ] 00:18:12.150 }' 00:18:12.150 14:20:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@660 -- # break 00:18:12.150 14:20:04 -- 
00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.150 14:20:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.408 14:20:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:12.408 "name": "raid_bdev1", 00:18:12.408 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:12.408 "strip_size_kb": 0, 00:18:12.408 "state": "online", 00:18:12.408 "raid_level": "raid1", 00:18:12.408 "superblock": false, 00:18:12.408 "num_base_bdevs": 2, 00:18:12.408 "num_base_bdevs_discovered": 2, 00:18:12.408 "num_base_bdevs_operational": 2, 00:18:12.408 "base_bdevs_list": [ 00:18:12.408 { 00:18:12.408 "name": "spare", 00:18:12.408 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:12.408 "is_configured": true, 00:18:12.408 "data_offset": 0, 00:18:12.408 "data_size": 65536 00:18:12.408 }, 00:18:12.408 { 00:18:12.408 "name": "BaseBdev2", 00:18:12.408 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:12.408 "is_configured": true, 00:18:12.408 "data_offset": 0, 00:18:12.408 "data_size": 65536 00:18:12.408 } 00:18:12.408 ] 00:18:12.408 }' 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.409 14:20:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.667 14:20:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.667 "name": "raid_bdev1", 00:18:12.667 "uuid": "eb3bac08-e8a1-42be-9b18-f38b33fa9063", 00:18:12.667 "strip_size_kb": 0, 00:18:12.667 "state": "online", 00:18:12.667 "raid_level": "raid1", 00:18:12.667 "superblock": false, 00:18:12.667 "num_base_bdevs": 2, 00:18:12.667 "num_base_bdevs_discovered": 2, 00:18:12.667 "num_base_bdevs_operational": 2, 00:18:12.667 "base_bdevs_list": [ 00:18:12.667 { 00:18:12.668 "name": "spare", 00:18:12.668 "uuid": "9970af1b-988a-57a7-80bf-794ddb4c7d71", 00:18:12.668 "is_configured": true, 00:18:12.668 "data_offset": 0, 
00:18:12.668 "data_size": 65536 00:18:12.668 }, 00:18:12.668 { 00:18:12.668 "name": "BaseBdev2", 00:18:12.668 "uuid": "efd69120-0b26-4536-a7ca-67d8b9c8ee04", 00:18:12.668 "is_configured": true, 00:18:12.668 "data_offset": 0, 00:18:12.668 "data_size": 65536 00:18:12.668 } 00:18:12.668 ] 00:18:12.668 }' 00:18:12.668 14:20:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.668 14:20:04 -- common/autotest_common.sh@10 -- # set +x 00:18:13.251 14:20:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:13.539 [2024-11-18 14:20:05.528010] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.539 [2024-11-18 14:20:05.528034] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.539 [2024-11-18 14:20:05.528139] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.539 [2024-11-18 14:20:05.528209] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.539 [2024-11-18 14:20:05.528222] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:18:13.539 14:20:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.539 14:20:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:13.808 14:20:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:13.808 14:20:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:18:13.808 14:20:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@12 -- # local i 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:13.808 14:20:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:14.068 /dev/nbd0 00:18:14.068 14:20:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:14.068 14:20:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:14.068 14:20:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:14.068 14:20:06 -- common/autotest_common.sh@867 -- # local i 00:18:14.068 14:20:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:14.068 14:20:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:14.068 14:20:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:14.068 14:20:06 -- common/autotest_common.sh@871 -- # break 00:18:14.068 14:20:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:14.068 14:20:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:14.068 14:20:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.068 1+0 records in 00:18:14.068 1+0 records out 00:18:14.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052708 s, 7.8 MB/s 00:18:14.068 14:20:06 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.068 14:20:06 -- common/autotest_common.sh@884 -- # size=4096 00:18:14.068 14:20:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.068 14:20:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:14.068 14:20:06 -- common/autotest_common.sh@887 -- # return 0 00:18:14.068 14:20:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.068 14:20:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.068 14:20:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:14.327 /dev/nbd1 00:18:14.327 14:20:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:14.327 14:20:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:14.327 14:20:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:14.327 14:20:06 -- common/autotest_common.sh@867 -- # local i 00:18:14.327 14:20:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:14.327 14:20:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:14.327 14:20:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:14.327 14:20:06 -- common/autotest_common.sh@871 -- # break 00:18:14.327 14:20:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:14.327 14:20:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:14.327 14:20:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.327 1+0 records in 00:18:14.327 1+0 records out 00:18:14.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540724 s, 7.6 MB/s 00:18:14.327 14:20:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.327 14:20:06 -- common/autotest_common.sh@884 -- # size=4096 00:18:14.327 14:20:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.327 14:20:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:14.327 14:20:06 -- common/autotest_common.sh@887 -- # return 0 00:18:14.327 14:20:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.327 14:20:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.327 14:20:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:14.585 14:20:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:14.585 14:20:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:14.585 14:20:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:14.585 14:20:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.585 14:20:06 -- bdev/nbd_common.sh@51 -- # local i 00:18:14.585 14:20:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.585 14:20:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:14.844 14:20:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.844 14:20:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.844 14:20:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.845 14:20:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.845 14:20:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.845 14:20:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.845 14:20:06 -- bdev/nbd_common.sh@41 -- # break 
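The teardown traced here pairs the data-integrity check with an explicit wait: after cmp -i 0 /dev/nbd0 /dev/nbd1 has shown the surviving base bdev and the rebuilt spare to be byte-identical, each nbd_stop_disk RPC only requests the detach, so the helper polls /proc/partitions until the kernel entry is gone. A hedged sketch of that wait, assuming the same polling convention as the trace (the function name is illustrative, not the autotest helper itself):

    # Wait for an NBD device to disappear after nbd_stop_disk.
    wait_nbd_detached() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device vanishes from /proc/partitions once the detach completes
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1  # still attached after roughly two seconds
    }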
00:18:14.845 14:20:06 -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.845 14:20:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.845 14:20:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@41 -- # break 00:18:15.103 14:20:06 -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.103 14:20:06 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:18:15.103 14:20:06 -- bdev/bdev_raid.sh@709 -- # killprocess 132402 00:18:15.103 14:20:06 -- common/autotest_common.sh@936 -- # '[' -z 132402 ']' 00:18:15.104 14:20:06 -- common/autotest_common.sh@940 -- # kill -0 132402 00:18:15.104 14:20:06 -- common/autotest_common.sh@941 -- # uname 00:18:15.104 14:20:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.104 14:20:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132402 00:18:15.104 14:20:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:15.104 14:20:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:15.104 14:20:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132402' 00:18:15.104 killing process with pid 132402 00:18:15.104 14:20:06 -- common/autotest_common.sh@955 -- # kill 132402 00:18:15.104 Received shutdown signal, test time was about 60.000000 seconds 00:18:15.104 00:18:15.104 Latency(us) 00:18:15.104 [2024-11-18T14:20:07.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.104 [2024-11-18T14:20:07.178Z] =================================================================================================================== 00:18:15.104 [2024-11-18T14:20:07.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.104 [2024-11-18 14:20:06.988958] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.104 14:20:06 -- common/autotest_common.sh@960 -- # wait 132402 00:18:15.104 [2024-11-18 14:20:07.018122] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:15.363 00:18:15.363 real 0m20.287s 00:18:15.363 user 0m28.493s 00:18:15.363 sys 0m3.516s 00:18:15.363 14:20:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:15.363 14:20:07 -- common/autotest_common.sh@10 -- # set +x 00:18:15.363 ************************************ 00:18:15.363 END TEST raid_rebuild_test 00:18:15.363 ************************************ 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:18:15.363 14:20:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:15.363 14:20:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.363 14:20:07 -- common/autotest_common.sh@10 -- # set +x 00:18:15.363 ************************************ 00:18:15.363 START TEST raid_rebuild_test_sb 00:18:15.363 ************************************ 00:18:15.363 14:20:07 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:18:15.363 
14:20:07 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@544 -- # raid_pid=132929 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132929 /var/tmp/spdk-raid.sock 00:18:15.363 14:20:07 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.363 14:20:07 -- common/autotest_common.sh@829 -- # '[' -z 132929 ']' 00:18:15.363 14:20:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:15.363 14:20:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:15.363 14:20:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:15.363 14:20:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.363 14:20:07 -- common/autotest_common.sh@10 -- # set +x 00:18:15.622 [2024-11-18 14:20:07.441866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:15.622 [2024-11-18 14:20:07.442083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132929 ] 00:18:15.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:15.622 Zero copy mechanism will not be used. 
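Each rebuild test runs against a dedicated bdevperf instance (raid_pid=132929 here) that serves RPC on its own UNIX socket, and waitforlisten blocks until that socket answers before any bdev_* call is issued. A condensed sketch of the same launch-and-wait handshake; the rpc_get_methods probe and the retry bound are assumptions for illustration, not the autotest implementation:

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    for ((i = 0; i < 100; i++)); do
        # any cheap RPC succeeding proves the server is accepting connections
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods > /dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done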
00:18:15.622 [2024-11-18 14:20:07.588550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.622 [2024-11-18 14:20:07.663875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.622 [2024-11-18 14:20:07.734378] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.455 14:20:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.455 14:20:08 -- common/autotest_common.sh@862 -- # return 0 00:18:16.455 14:20:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:16.455 14:20:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:16.455 14:20:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:16.712 BaseBdev1_malloc 00:18:16.712 14:20:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:16.712 [2024-11-18 14:20:08.759332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:16.712 [2024-11-18 14:20:08.759431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.712 [2024-11-18 14:20:08.759476] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:16.712 [2024-11-18 14:20:08.759532] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.712 [2024-11-18 14:20:08.761897] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.712 [2024-11-18 14:20:08.761954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.712 BaseBdev1 00:18:16.712 14:20:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:16.712 14:20:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:16.712 14:20:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:16.970 BaseBdev2_malloc 00:18:16.970 14:20:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:17.228 [2024-11-18 14:20:09.112886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:17.228 [2024-11-18 14:20:09.112953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.228 [2024-11-18 14:20:09.112989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:17.228 [2024-11-18 14:20:09.113033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.228 [2024-11-18 14:20:09.115266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.228 [2024-11-18 14:20:09.115315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:17.228 BaseBdev2 00:18:17.228 14:20:09 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:17.485 spare_malloc 00:18:17.485 14:20:09 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:17.485 spare_delay
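The RPCs above assemble the test array's backing stack: each base leg is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev so it can be claimed and later removed by name, while the spare leg inserts a delay bdev (the -r/-t/-w/-n latency arguments traced above, with writes deliberately slowed) so the rebuild runs long enough to observe progress. A condensed sketch of that stack; the rpc wrapper and the loop are illustrative, the RPC names and arguments are the ones in the trace:

    sock=/var/tmp/spdk-raid.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }

    for name in BaseBdev1 BaseBdev2; do
        rpc bdev_malloc_create 32 512 -b "${name}_malloc"       # raw backing store
        rpc bdev_passthru_create -b "${name}_malloc" -p "$name" # claimable alias
    done
    rpc bdev_malloc_create 32 512 -b spare_malloc
    # slow down I/O to the spare so the rebuild stays observable
    rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    rpc bdev_passthru_create -b spare_delay -p spare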
00:18:17.485 14:20:09 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:17.743 [2024-11-18 14:20:09.661784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.743 [2024-11-18 14:20:09.661849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.743 [2024-11-18 14:20:09.661884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:17.743 [2024-11-18 14:20:09.661928] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.743 [2024-11-18 14:20:09.664243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.743 [2024-11-18 14:20:09.664298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.743 spare 00:18:17.743 14:20:09 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:18.002 [2024-11-18 14:20:09.893887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.002 [2024-11-18 14:20:09.895920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.002 [2024-11-18 14:20:09.896133] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:18:18.002 [2024-11-18 14:20:09.896148] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:18.002 [2024-11-18 14:20:09.896290] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:18:18.002 [2024-11-18 14:20:09.896680] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:18:18.002 [2024-11-18 14:20:09.896701] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:18:18.002 [2024-11-18 14:20:09.896828] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.002 14:20:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.261 14:20:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.261 "name": "raid_bdev1", 00:18:18.261 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:18.261 "strip_size_kb": 0, 00:18:18.261 "state": "online", 00:18:18.261 "raid_level": "raid1", 00:18:18.261 "superblock": true, 00:18:18.261 "num_base_bdevs": 2, 00:18:18.261 "num_base_bdevs_discovered": 2, 00:18:18.261 "num_base_bdevs_operational": 2, 
"base_bdevs_list": [ 00:18:18.261 { 00:18:18.261 "name": "BaseBdev1", 00:18:18.261 "uuid": "9db52b58-9f73-51a6-bfd8-672b685005dd", 00:18:18.261 "is_configured": true, 00:18:18.261 "data_offset": 2048, 00:18:18.261 "data_size": 63488 00:18:18.261 }, 00:18:18.261 { 00:18:18.261 "name": "BaseBdev2", 00:18:18.261 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:18.261 "is_configured": true, 00:18:18.261 "data_offset": 2048, 00:18:18.261 "data_size": 63488 00:18:18.261 } 00:18:18.261 ] 00:18:18.261 }' 00:18:18.261 14:20:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.261 14:20:10 -- common/autotest_common.sh@10 -- # set +x 00:18:18.828 14:20:10 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:18.828 14:20:10 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:19.086 [2024-11-18 14:20:10.918188] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.087 14:20:10 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:18:19.087 14:20:10 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:19.087 14:20:10 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.345 14:20:11 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:18:19.345 14:20:11 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:18:19.345 14:20:11 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:18:19.345 14:20:11 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@12 -- # local i 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.345 14:20:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:19.604 [2024-11-18 14:20:11.435482] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:19.604 /dev/nbd0 00:18:19.604 14:20:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:19.604 14:20:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:19.604 14:20:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:19.604 14:20:11 -- common/autotest_common.sh@867 -- # local i 00:18:19.604 14:20:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:19.604 14:20:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:19.604 14:20:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:19.604 14:20:11 -- common/autotest_common.sh@871 -- # break 00:18:19.604 14:20:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:19.604 14:20:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:19.604 14:20:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.604 1+0 records in 00:18:19.604 1+0 records out 00:18:19.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333576 s, 12.3 MB/s 00:18:19.604 14:20:11 
-- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.604 14:20:11 -- common/autotest_common.sh@884 -- # size=4096 00:18:19.604 14:20:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.604 14:20:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:19.604 14:20:11 -- common/autotest_common.sh@887 -- # return 0 00:18:19.604 14:20:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.604 14:20:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:19.604 14:20:11 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:18:19.604 14:20:11 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:18:19.605 14:20:11 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:23.795 63488+0 records in 00:18:23.795 63488+0 records out 00:18:23.795 32505856 bytes (33 MB, 31 MiB) copied, 4.29733 s, 7.6 MB/s 00:18:23.795 14:20:15 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:23.795 14:20:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:23.795 14:20:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:23.795 14:20:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:23.795 14:20:15 -- bdev/nbd_common.sh@51 -- # local i 00:18:23.795 14:20:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:23.795 14:20:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@41 -- # break 00:18:24.053 14:20:16 -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.053 14:20:16 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:24.053 [2024-11-18 14:20:16.055091] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.312 [2024-11-18 14:20:16.222691] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.312 14:20:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.572 
14:20:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.572 "name": "raid_bdev1", 00:18:24.572 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:24.572 "strip_size_kb": 0, 00:18:24.572 "state": "online", 00:18:24.572 "raid_level": "raid1", 00:18:24.572 "superblock": true, 00:18:24.572 "num_base_bdevs": 2, 00:18:24.572 "num_base_bdevs_discovered": 1, 00:18:24.572 "num_base_bdevs_operational": 1, 00:18:24.572 "base_bdevs_list": [ 00:18:24.572 { 00:18:24.572 "name": null, 00:18:24.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.572 "is_configured": false, 00:18:24.572 "data_offset": 2048, 00:18:24.572 "data_size": 63488 00:18:24.572 }, 00:18:24.572 { 00:18:24.572 "name": "BaseBdev2", 00:18:24.572 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:24.572 "is_configured": true, 00:18:24.572 "data_offset": 2048, 00:18:24.572 "data_size": 63488 00:18:24.572 } 00:18:24.572 ] 00:18:24.572 }' 00:18:24.572 14:20:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.572 14:20:16 -- common/autotest_common.sh@10 -- # set +x 00:18:25.140 14:20:17 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:25.398 [2024-11-18 14:20:17.287185] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:25.398 [2024-11-18 14:20:17.287242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.398 [2024-11-18 14:20:17.294022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:18:25.398 [2024-11-18 14:20:17.296173] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.398 14:20:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.335 14:20:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.594 14:20:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:26.594 "name": "raid_bdev1", 00:18:26.594 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:26.594 "strip_size_kb": 0, 00:18:26.594 "state": "online", 00:18:26.594 "raid_level": "raid1", 00:18:26.594 "superblock": true, 00:18:26.594 "num_base_bdevs": 2, 00:18:26.594 "num_base_bdevs_discovered": 2, 00:18:26.594 "num_base_bdevs_operational": 2, 00:18:26.594 "process": { 00:18:26.594 "type": "rebuild", 00:18:26.594 "target": "spare", 00:18:26.594 "progress": { 00:18:26.594 "blocks": 24576, 00:18:26.594 "percent": 38 00:18:26.594 } 00:18:26.594 }, 00:18:26.594 "base_bdevs_list": [ 00:18:26.594 { 00:18:26.594 "name": "spare", 00:18:26.594 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:26.594 "is_configured": true, 00:18:26.594 "data_offset": 2048, 00:18:26.594 "data_size": 63488 00:18:26.594 }, 00:18:26.594 { 00:18:26.594 "name": "BaseBdev2", 00:18:26.594 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:26.594 "is_configured": true, 00:18:26.594 "data_offset": 2048, 00:18:26.594 "data_size": 63488 
00:18:26.594 } 00:18:26.594 ] 00:18:26.594 }' 00:18:26.594 14:20:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:26.594 14:20:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.594 14:20:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:26.594 14:20:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.594 14:20:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:26.853 [2024-11-18 14:20:18.870441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.853 [2024-11-18 14:20:18.906298] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:26.853 [2024-11-18 14:20:18.906384] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.112 14:20:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.112 14:20:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.112 "name": "raid_bdev1", 00:18:27.112 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:27.112 "strip_size_kb": 0, 00:18:27.112 "state": "online", 00:18:27.112 "raid_level": "raid1", 00:18:27.112 "superblock": true, 00:18:27.112 "num_base_bdevs": 2, 00:18:27.112 "num_base_bdevs_discovered": 1, 00:18:27.112 "num_base_bdevs_operational": 1, 00:18:27.112 "base_bdevs_list": [ 00:18:27.112 { 00:18:27.112 "name": null, 00:18:27.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.112 "is_configured": false, 00:18:27.112 "data_offset": 2048, 00:18:27.112 "data_size": 63488 00:18:27.112 }, 00:18:27.112 { 00:18:27.112 "name": "BaseBdev2", 00:18:27.112 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:27.112 "is_configured": true, 00:18:27.112 "data_offset": 2048, 00:18:27.112 "data_size": 63488 00:18:27.112 } 00:18:27.112 ] 00:18:27.112 }' 00:18:27.112 14:20:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.112 14:20:19 -- common/autotest_common.sh@10 -- # set +x 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:28.047 "name": "raid_bdev1", 00:18:28.047 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:28.047 "strip_size_kb": 0, 00:18:28.047 "state": "online", 00:18:28.047 "raid_level": "raid1", 00:18:28.047 "superblock": true, 00:18:28.047 "num_base_bdevs": 2, 00:18:28.047 "num_base_bdevs_discovered": 1, 00:18:28.047 "num_base_bdevs_operational": 1, 00:18:28.047 "base_bdevs_list": [ 00:18:28.047 { 00:18:28.047 "name": null, 00:18:28.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.047 "is_configured": false, 00:18:28.047 "data_offset": 2048, 00:18:28.047 "data_size": 63488 00:18:28.047 }, 00:18:28.047 { 00:18:28.047 "name": "BaseBdev2", 00:18:28.047 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:28.047 "is_configured": true, 00:18:28.047 "data_offset": 2048, 00:18:28.047 "data_size": 63488 00:18:28.047 } 00:18:28.047 ] 00:18:28.047 }' 00:18:28.047 14:20:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:28.047 14:20:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:28.047 14:20:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:28.047 14:20:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:28.047 14:20:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:28.306 [2024-11-18 14:20:20.251571] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:28.306 [2024-11-18 14:20:20.251606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:28.306 [2024-11-18 14:20:20.253511] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:18:28.306 [2024-11-18 14:20:20.255476] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.306 14:20:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.242 14:20:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.500 14:20:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:29.500 "name": "raid_bdev1", 00:18:29.500 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:29.500 "strip_size_kb": 0, 00:18:29.500 "state": "online", 00:18:29.500 "raid_level": "raid1", 00:18:29.500 "superblock": true, 00:18:29.500 "num_base_bdevs": 2, 00:18:29.500 "num_base_bdevs_discovered": 2, 00:18:29.500 "num_base_bdevs_operational": 2, 00:18:29.500 "process": { 00:18:29.500 "type": "rebuild", 00:18:29.500 "target": "spare", 00:18:29.500 "progress": { 00:18:29.500 "blocks": 24576, 00:18:29.500 "percent": 38 00:18:29.500 } 00:18:29.500 }, 00:18:29.500 "base_bdevs_list": [ 00:18:29.500 { 00:18:29.500 "name": "spare", 00:18:29.500 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:29.500 
"is_configured": true, 00:18:29.500 "data_offset": 2048, 00:18:29.500 "data_size": 63488 00:18:29.500 }, 00:18:29.500 { 00:18:29.500 "name": "BaseBdev2", 00:18:29.500 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:29.500 "is_configured": true, 00:18:29.500 "data_offset": 2048, 00:18:29.500 "data_size": 63488 00:18:29.500 } 00:18:29.500 ] 00:18:29.500 }' 00:18:29.500 14:20:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:29.500 14:20:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.500 14:20:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:18:29.759 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@657 -- # local timeout=377 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:30.018 "name": "raid_bdev1", 00:18:30.018 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:30.018 "strip_size_kb": 0, 00:18:30.018 "state": "online", 00:18:30.018 "raid_level": "raid1", 00:18:30.018 "superblock": true, 00:18:30.018 "num_base_bdevs": 2, 00:18:30.018 "num_base_bdevs_discovered": 2, 00:18:30.018 "num_base_bdevs_operational": 2, 00:18:30.018 "process": { 00:18:30.018 "type": "rebuild", 00:18:30.018 "target": "spare", 00:18:30.018 "progress": { 00:18:30.018 "blocks": 30720, 00:18:30.018 "percent": 48 00:18:30.018 } 00:18:30.018 }, 00:18:30.018 "base_bdevs_list": [ 00:18:30.018 { 00:18:30.018 "name": "spare", 00:18:30.018 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:30.018 "is_configured": true, 00:18:30.018 "data_offset": 2048, 00:18:30.018 "data_size": 63488 00:18:30.018 }, 00:18:30.018 { 00:18:30.018 "name": "BaseBdev2", 00:18:30.018 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:30.018 "is_configured": true, 00:18:30.018 "data_offset": 2048, 00:18:30.018 "data_size": 63488 00:18:30.018 } 00:18:30.018 ] 00:18:30.018 }' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@658 
00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@657 -- # local timeout=377 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.759 14:20:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:30.018 "name": "raid_bdev1", 00:18:30.018 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:30.018 "strip_size_kb": 0, 00:18:30.018 "state": "online", 00:18:30.018 "raid_level": "raid1", 00:18:30.018 "superblock": true, 00:18:30.018 "num_base_bdevs": 2, 00:18:30.018 "num_base_bdevs_discovered": 2, 00:18:30.018 "num_base_bdevs_operational": 2, 00:18:30.018 "process": { 00:18:30.018 "type": "rebuild", 00:18:30.018 "target": "spare", 00:18:30.018 "progress": { 00:18:30.018 "blocks": 30720, 00:18:30.018 "percent": 48 00:18:30.018 } 00:18:30.018 }, 00:18:30.018 "base_bdevs_list": [ 00:18:30.018 { 00:18:30.018 "name": "spare", 00:18:30.018 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:30.018 "is_configured": true, 00:18:30.018 "data_offset": 2048, 00:18:30.018 "data_size": 63488 00:18:30.018 }, 00:18:30.018 { 00:18:30.018 "name": "BaseBdev2", 00:18:30.018 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:30.018 "is_configured": true, 00:18:30.018 "data_offset": 2048, 00:18:30.018 "data_size": 63488 00:18:30.018 } 00:18:30.018 ] 00:18:30.018 }' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.018 14:20:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.954 14:20:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.212 14:20:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:31.212 "name": "raid_bdev1", 00:18:31.213 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:31.213 "strip_size_kb": 0, 00:18:31.213 "state": "online", 00:18:31.213 "raid_level": "raid1", 00:18:31.213 "superblock": true, 00:18:31.213 "num_base_bdevs": 2, 00:18:31.213 "num_base_bdevs_discovered": 2, 00:18:31.213 "num_base_bdevs_operational": 2, 00:18:31.213 "process": { 00:18:31.213 "type": "rebuild", 00:18:31.213 "target": "spare", 00:18:31.213 "progress": { 00:18:31.213 "blocks": 59392, 00:18:31.213 "percent": 93 00:18:31.213 } 00:18:31.213 }, 00:18:31.213 "base_bdevs_list": [ 00:18:31.213 { 00:18:31.213 "name": "spare", 00:18:31.213 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:31.213 "is_configured": true, 00:18:31.213 "data_offset": 2048, 00:18:31.213 "data_size": 63488 00:18:31.213 }, 00:18:31.213 { 00:18:31.213 "name": "BaseBdev2", 00:18:31.213 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:31.213 "is_configured": true, 00:18:31.213 "data_offset": 2048, 00:18:31.213 "data_size": 63488 00:18:31.213 } 00:18:31.213 ] 00:18:31.213 }' 00:18:31.213 14:20:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:31.213 14:20:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.213 14:20:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:31.471 14:20:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.471 14:20:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:31.471 [2024-11-18 14:20:23.371070] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:31.471 [2024-11-18 14:20:23.371162] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:31.471 [2024-11-18 14:20:23.371301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.408 14:20:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:32.667 "name": "raid_bdev1", 00:18:32.667 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:32.667 "strip_size_kb": 0, 00:18:32.667 "state": 
"online", 00:18:32.667 "raid_level": "raid1", 00:18:32.667 "superblock": true, 00:18:32.667 "num_base_bdevs": 2, 00:18:32.667 "num_base_bdevs_discovered": 2, 00:18:32.667 "num_base_bdevs_operational": 2, 00:18:32.667 "base_bdevs_list": [ 00:18:32.667 { 00:18:32.667 "name": "spare", 00:18:32.667 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:32.667 "is_configured": true, 00:18:32.667 "data_offset": 2048, 00:18:32.667 "data_size": 63488 00:18:32.667 }, 00:18:32.667 { 00:18:32.667 "name": "BaseBdev2", 00:18:32.667 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:32.667 "is_configured": true, 00:18:32.667 "data_offset": 2048, 00:18:32.667 "data_size": 63488 00:18:32.667 } 00:18:32.667 ] 00:18:32.667 }' 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@660 -- # break 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.667 14:20:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:32.926 "name": "raid_bdev1", 00:18:32.926 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:32.926 "strip_size_kb": 0, 00:18:32.926 "state": "online", 00:18:32.926 "raid_level": "raid1", 00:18:32.926 "superblock": true, 00:18:32.926 "num_base_bdevs": 2, 00:18:32.926 "num_base_bdevs_discovered": 2, 00:18:32.926 "num_base_bdevs_operational": 2, 00:18:32.926 "base_bdevs_list": [ 00:18:32.926 { 00:18:32.926 "name": "spare", 00:18:32.926 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:32.926 "is_configured": true, 00:18:32.926 "data_offset": 2048, 00:18:32.926 "data_size": 63488 00:18:32.926 }, 00:18:32.926 { 00:18:32.926 "name": "BaseBdev2", 00:18:32.926 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:32.926 "is_configured": true, 00:18:32.926 "data_offset": 2048, 00:18:32.926 "data_size": 63488 00:18:32.926 } 00:18:32.926 ] 00:18:32.926 }' 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:32.926 14:20:24 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.926 14:20:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.184 14:20:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.184 "name": "raid_bdev1", 00:18:33.184 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:33.184 "strip_size_kb": 0, 00:18:33.184 "state": "online", 00:18:33.184 "raid_level": "raid1", 00:18:33.184 "superblock": true, 00:18:33.184 "num_base_bdevs": 2, 00:18:33.184 "num_base_bdevs_discovered": 2, 00:18:33.184 "num_base_bdevs_operational": 2, 00:18:33.184 "base_bdevs_list": [ 00:18:33.184 { 00:18:33.184 "name": "spare", 00:18:33.184 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:33.184 "is_configured": true, 00:18:33.184 "data_offset": 2048, 00:18:33.184 "data_size": 63488 00:18:33.184 }, 00:18:33.184 { 00:18:33.184 "name": "BaseBdev2", 00:18:33.184 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:33.184 "is_configured": true, 00:18:33.184 "data_offset": 2048, 00:18:33.184 "data_size": 63488 00:18:33.184 } 00:18:33.184 ] 00:18:33.184 }' 00:18:33.184 14:20:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.184 14:20:25 -- common/autotest_common.sh@10 -- # set +x 00:18:34.120 14:20:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:34.120 [2024-11-18 14:20:26.087555] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.120 [2024-11-18 14:20:26.087580] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.120 [2024-11-18 14:20:26.087686] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.120 [2024-11-18 14:20:26.087759] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.120 [2024-11-18 14:20:26.087772] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:18:34.120 14:20:26 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.120 14:20:26 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:34.378 14:20:26 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:34.378 14:20:26 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:18:34.378 14:20:26 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:34.378 14:20:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:34.378 14:20:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:34.378 14:20:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:34.378 14:20:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:34.378 14:20:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:34.378 14:20:26 -- bdev/nbd_common.sh@12 -- # local i 00:18:34.379 14:20:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:34.379 14:20:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.379 14:20:26 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:34.637 /dev/nbd0 00:18:34.637 14:20:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:34.637 14:20:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:34.637 14:20:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:34.637 14:20:26 -- common/autotest_common.sh@867 -- # local i 00:18:34.637 14:20:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:34.637 14:20:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:34.637 14:20:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:34.637 14:20:26 -- common/autotest_common.sh@871 -- # break 00:18:34.637 14:20:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:34.637 14:20:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:34.637 14:20:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.637 1+0 records in 00:18:34.637 1+0 records out 00:18:34.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031083 s, 13.2 MB/s 00:18:34.637 14:20:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.637 14:20:26 -- common/autotest_common.sh@884 -- # size=4096 00:18:34.637 14:20:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.637 14:20:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:34.637 14:20:26 -- common/autotest_common.sh@887 -- # return 0 00:18:34.637 14:20:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.637 14:20:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.637 14:20:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:18:34.896 /dev/nbd1 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:34.896 14:20:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:34.896 14:20:26 -- common/autotest_common.sh@867 -- # local i 00:18:34.896 14:20:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:34.896 14:20:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:34.896 14:20:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:34.896 14:20:26 -- common/autotest_common.sh@871 -- # break 00:18:34.896 14:20:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:34.896 14:20:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:34.896 14:20:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.896 1+0 records in 00:18:34.896 1+0 records out 00:18:34.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316306 s, 12.9 MB/s 00:18:34.896 14:20:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.896 14:20:26 -- common/autotest_common.sh@884 -- # size=4096 00:18:34.896 14:20:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.896 14:20:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:34.896 14:20:26 -- common/autotest_common.sh@887 -- # return 0 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:34.896 14:20:26 -- bdev/bdev_raid.sh@688 
-- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:34.896 14:20:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@51 -- # local i 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.896 14:20:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@41 -- # break 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.154 14:20:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@41 -- # break 00:18:35.412 14:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.412 14:20:27 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:18:35.412 14:20:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:18:35.412 14:20:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:18:35.412 14:20:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:35.671 14:20:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:35.930 [2024-11-18 14:20:27.851857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:35.930 [2024-11-18 14:20:27.851949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.930 [2024-11-18 14:20:27.851988] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:35.930 [2024-11-18 14:20:27.852019] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.930 [2024-11-18 14:20:27.854986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.930 [2024-11-18 14:20:27.855060] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:35.930 [2024-11-18 14:20:27.855141] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:35.930 [2024-11-18 14:20:27.855219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:18:35.930 BaseBdev1 00:18:35.930 14:20:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:18:35.930 14:20:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:18:35.930 14:20:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:18:36.188 14:20:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:36.188 [2024-11-18 14:20:28.223902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:36.188 [2024-11-18 14:20:28.223962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.189 [2024-11-18 14:20:28.224013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:36.189 [2024-11-18 14:20:28.224041] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.189 [2024-11-18 14:20:28.224362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.189 [2024-11-18 14:20:28.224428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:36.189 [2024-11-18 14:20:28.224494] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:18:36.189 [2024-11-18 14:20:28.224508] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:18:36.189 [2024-11-18 14:20:28.224515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.189 [2024-11-18 14:20:28.224545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:18:36.189 [2024-11-18 14:20:28.224583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.189 BaseBdev2 00:18:36.189 14:20:28 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:36.447 14:20:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:36.705 [2024-11-18 14:20:28.667969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.705 [2024-11-18 14:20:28.668038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.705 [2024-11-18 14:20:28.668078] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:36.705 [2024-11-18 14:20:28.668103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.705 [2024-11-18 14:20:28.668470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.705 [2024-11-18 14:20:28.668523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.705 [2024-11-18 14:20:28.668597] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:18:36.705 [2024-11-18 14:20:28.668635] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:36.705 spare 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:36.705 14:20:28 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.705 14:20:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.705 [2024-11-18 14:20:28.768735] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:18:36.705 [2024-11-18 14:20:28.768757] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:36.705 [2024-11-18 14:20:28.768891] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:18:36.705 [2024-11-18 14:20:28.769277] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:18:36.705 [2024-11-18 14:20:28.769299] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:18:36.705 [2024-11-18 14:20:28.769410] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.963 14:20:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.963 "name": "raid_bdev1", 00:18:36.963 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:36.963 "strip_size_kb": 0, 00:18:36.963 "state": "online", 00:18:36.963 "raid_level": "raid1", 00:18:36.963 "superblock": true, 00:18:36.963 "num_base_bdevs": 2, 00:18:36.963 "num_base_bdevs_discovered": 2, 00:18:36.963 "num_base_bdevs_operational": 2, 00:18:36.963 "base_bdevs_list": [ 00:18:36.963 { 00:18:36.963 "name": "spare", 00:18:36.963 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:36.963 "is_configured": true, 00:18:36.963 "data_offset": 2048, 00:18:36.963 "data_size": 63488 00:18:36.963 }, 00:18:36.963 { 00:18:36.963 "name": "BaseBdev2", 00:18:36.963 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:36.963 "is_configured": true, 00:18:36.963 "data_offset": 2048, 00:18:36.963 "data_size": 63488 00:18:36.963 } 00:18:36.963 ] 00:18:36.963 }' 00:18:36.963 14:20:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.963 14:20:28 -- common/autotest_common.sh@10 -- # set +x 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.530 14:20:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:37.788 "name": "raid_bdev1", 00:18:37.788 "uuid": "dd5d0d89-3a37-4080-9078-101c28ac1aa4", 00:18:37.788 "strip_size_kb": 0, 00:18:37.788 "state": "online", 
00:18:37.788 "raid_level": "raid1", 00:18:37.788 "superblock": true, 00:18:37.788 "num_base_bdevs": 2, 00:18:37.788 "num_base_bdevs_discovered": 2, 00:18:37.788 "num_base_bdevs_operational": 2, 00:18:37.788 "base_bdevs_list": [ 00:18:37.788 { 00:18:37.788 "name": "spare", 00:18:37.788 "uuid": "b39acafb-c1a6-5a7c-986f-42e4c019e25f", 00:18:37.788 "is_configured": true, 00:18:37.788 "data_offset": 2048, 00:18:37.788 "data_size": 63488 00:18:37.788 }, 00:18:37.788 { 00:18:37.788 "name": "BaseBdev2", 00:18:37.788 "uuid": "7dc37050-98da-5e0f-9659-b389eb00ea00", 00:18:37.788 "is_configured": true, 00:18:37.788 "data_offset": 2048, 00:18:37.788 "data_size": 63488 00:18:37.788 } 00:18:37.788 ] 00:18:37.788 }' 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.788 14:20:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:38.046 14:20:30 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.046 14:20:30 -- bdev/bdev_raid.sh@709 -- # killprocess 132929 00:18:38.046 14:20:30 -- common/autotest_common.sh@936 -- # '[' -z 132929 ']' 00:18:38.046 14:20:30 -- common/autotest_common.sh@940 -- # kill -0 132929 00:18:38.046 14:20:30 -- common/autotest_common.sh@941 -- # uname 00:18:38.046 14:20:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:38.046 14:20:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132929 00:18:38.046 14:20:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:38.046 14:20:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:38.046 killing process with pid 132929 00:18:38.046 14:20:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132929' 00:18:38.046 Received shutdown signal, test time was about 60.000000 seconds 00:18:38.046 00:18:38.046 Latency(us) 00:18:38.046 [2024-11-18T14:20:30.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.046 [2024-11-18T14:20:30.120Z] =================================================================================================================== 00:18:38.046 [2024-11-18T14:20:30.120Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:38.046 14:20:30 -- common/autotest_common.sh@955 -- # kill 132929 00:18:38.046 [2024-11-18 14:20:30.052826] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.046 14:20:30 -- common/autotest_common.sh@960 -- # wait 132929 00:18:38.046 [2024-11-18 14:20:30.052890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.046 [2024-11-18 14:20:30.052940] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.046 [2024-11-18 14:20:30.052951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:18:38.046 [2024-11-18 14:20:30.083249] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:38.615 00:18:38.615 real 0m23.039s 00:18:38.615 user 0m33.632s 00:18:38.615 sys 0m3.462s 00:18:38.615 14:20:30 
-- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:38.615 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:18:38.615 ************************************ 00:18:38.615 END TEST raid_rebuild_test_sb 00:18:38.615 ************************************ 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:18:38.615 14:20:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:38.615 14:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.615 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:18:38.615 ************************************ 00:18:38.615 START TEST raid_rebuild_test_io 00:18:38.615 ************************************ 00:18:38.615 14:20:30 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=133537 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133537 /var/tmp/spdk-raid.sock 00:18:38.615 14:20:30 -- common/autotest_common.sh@829 -- # '[' -z 133537 ']' 00:18:38.615 14:20:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:38.615 14:20:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:38.615 14:20:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:38.615 14:20:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:38.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
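The trace that follows drives every configuration step through the bdevperf RPC socket named above. A minimal sketch of that setup pattern, assuming bdevperf is already listening on /var/tmp/spdk-raid.sock; the $rpc shorthand variable is an illustration aid, not part of the original scripts:

    # assumed shorthand for the rpc.py invocation repeated throughout the trace
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # two 32 MiB / 512-byte-block malloc base bdevs, as created in the trace below
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    # assemble them into the RAID1 bdev the rebuild test operates on
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1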
00:18:38.615 14:20:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:38.615 14:20:30 -- common/autotest_common.sh@10 -- # set +x 00:18:38.615 [2024-11-18 14:20:30.535027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:38.615 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:38.615 Zero copy mechanism will not be used. 00:18:38.615 [2024-11-18 14:20:30.535227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133537 ] 00:18:38.615 [2024-11-18 14:20:30.673179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.874 [2024-11-18 14:20:30.746203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.874 [2024-11-18 14:20:30.815965] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:39.441 14:20:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.441 14:20:31 -- common/autotest_common.sh@862 -- # return 0 00:18:39.441 14:20:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:39.441 14:20:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:39.441 14:20:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:39.700 BaseBdev1 00:18:39.700 14:20:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:39.700 14:20:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:39.700 14:20:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:39.959 BaseBdev2 00:18:39.959 14:20:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:40.536 spare_malloc 00:18:40.536 14:20:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:40.536 spare_delay 00:18:40.536 14:20:32 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:40.794 [2024-11-18 14:20:32.670235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.794 [2024-11-18 14:20:32.670347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.794 [2024-11-18 14:20:32.670402] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:40.794 [2024-11-18 14:20:32.670449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.794 [2024-11-18 14:20:32.672880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.794 [2024-11-18 14:20:32.672941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.794 spare 00:18:40.794 14:20:32 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:41.053 [2024-11-18 14:20:32.870291] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.053 [2024-11-18 14:20:32.872327] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.053 [2024-11-18 14:20:32.872421] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:41.053 [2024-11-18 14:20:32.872435] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:41.053 [2024-11-18 14:20:32.872573] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:41.053 [2024-11-18 14:20:32.872960] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:41.053 [2024-11-18 14:20:32.872981] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:18:41.053 [2024-11-18 14:20:32.873139] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.053 14:20:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.053 14:20:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.054 "name": "raid_bdev1", 00:18:41.054 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:41.054 "strip_size_kb": 0, 00:18:41.054 "state": "online", 00:18:41.054 "raid_level": "raid1", 00:18:41.054 "superblock": false, 00:18:41.054 "num_base_bdevs": 2, 00:18:41.054 "num_base_bdevs_discovered": 2, 00:18:41.054 "num_base_bdevs_operational": 2, 00:18:41.054 "base_bdevs_list": [ 00:18:41.054 { 00:18:41.054 "name": "BaseBdev1", 00:18:41.054 "uuid": "58ccc417-5d03-415f-af73-04ae9ee4be25", 00:18:41.054 "is_configured": true, 00:18:41.054 "data_offset": 0, 00:18:41.054 "data_size": 65536 00:18:41.054 }, 00:18:41.054 { 00:18:41.054 "name": "BaseBdev2", 00:18:41.054 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:41.054 "is_configured": true, 00:18:41.054 "data_offset": 0, 00:18:41.054 "data_size": 65536 00:18:41.054 } 00:18:41.054 ] 00:18:41.054 }' 00:18:41.054 14:20:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.054 14:20:33 -- common/autotest_common.sh@10 -- # set +x 00:18:42.010 14:20:33 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:42.010 14:20:33 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:42.010 [2024-11-18 14:20:33.946621] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.010 14:20:33 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:18:42.010 14:20:33 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:42.010 14:20:33 -- bdev/bdev_raid.sh@570 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.296 14:20:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:18:42.296 14:20:34 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:18:42.296 14:20:34 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:42.296 14:20:34 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:42.296 [2024-11-18 14:20:34.317667] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:18:42.296 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:42.296 Zero copy mechanism will not be used. 00:18:42.296 Running I/O for 60 seconds... 00:18:42.566 [2024-11-18 14:20:34.442358] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:42.567 [2024-11-18 14:20:34.442625] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.567 14:20:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.826 14:20:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.826 "name": "raid_bdev1", 00:18:42.826 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:42.826 "strip_size_kb": 0, 00:18:42.826 "state": "online", 00:18:42.826 "raid_level": "raid1", 00:18:42.826 "superblock": false, 00:18:42.826 "num_base_bdevs": 2, 00:18:42.826 "num_base_bdevs_discovered": 1, 00:18:42.826 "num_base_bdevs_operational": 1, 00:18:42.826 "base_bdevs_list": [ 00:18:42.826 { 00:18:42.826 "name": null, 00:18:42.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.826 "is_configured": false, 00:18:42.826 "data_offset": 0, 00:18:42.826 "data_size": 65536 00:18:42.826 }, 00:18:42.826 { 00:18:42.826 "name": "BaseBdev2", 00:18:42.826 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:42.826 "is_configured": true, 00:18:42.826 "data_offset": 0, 00:18:42.826 "data_size": 65536 00:18:42.826 } 00:18:42.826 ] 00:18:42.826 }' 00:18:42.826 14:20:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.826 14:20:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.393 14:20:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:43.652 [2024-11-18 14:20:35.550620] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:43.652 [2024-11-18 14:20:35.550667] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.652 14:20:35 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:18:43.653 [2024-11-18 14:20:35.590665] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:18:43.653 [2024-11-18 14:20:35.592748] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.653 [2024-11-18 14:20:35.712364] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:43.653 [2024-11-18 14:20:35.712768] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:43.911 [2024-11-18 14:20:35.825666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:43.911 [2024-11-18 14:20:35.825788] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:44.479 [2024-11-18 14:20:36.292817] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:44.479 [2024-11-18 14:20:36.292984] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.737 14:20:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.737 [2024-11-18 14:20:36.745357] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:44.737 [2024-11-18 14:20:36.745522] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:44.996 14:20:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:44.996 "name": "raid_bdev1", 00:18:44.996 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:44.996 "strip_size_kb": 0, 00:18:44.996 "state": "online", 00:18:44.996 "raid_level": "raid1", 00:18:44.996 "superblock": false, 00:18:44.996 "num_base_bdevs": 2, 00:18:44.996 "num_base_bdevs_discovered": 2, 00:18:44.996 "num_base_bdevs_operational": 2, 00:18:44.996 "process": { 00:18:44.996 "type": "rebuild", 00:18:44.996 "target": "spare", 00:18:44.996 "progress": { 00:18:44.996 "blocks": 16384, 00:18:44.996 "percent": 25 00:18:44.996 } 00:18:44.996 }, 00:18:44.996 "base_bdevs_list": [ 00:18:44.996 { 00:18:44.996 "name": "spare", 00:18:44.996 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:44.996 "is_configured": true, 00:18:44.996 "data_offset": 0, 00:18:44.996 "data_size": 65536 00:18:44.996 }, 00:18:44.996 { 00:18:44.996 "name": "BaseBdev2", 00:18:44.996 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:44.996 "is_configured": true, 00:18:44.996 "data_offset": 0, 00:18:44.996 "data_size": 65536 00:18:44.996 } 00:18:44.996 ] 00:18:44.996 }' 00:18:44.996 14:20:36 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:18:44.996 14:20:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.996 14:20:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:44.996 14:20:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.996 14:20:36 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:45.255 [2024-11-18 14:20:37.167086] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.255 [2024-11-18 14:20:37.167171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:45.255 [2024-11-18 14:20:37.167372] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:45.255 [2024-11-18 14:20:37.273997] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:45.255 [2024-11-18 14:20:37.281428] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.255 [2024-11-18 14:20:37.295314] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.255 14:20:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.514 14:20:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.514 "name": "raid_bdev1", 00:18:45.514 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:45.514 "strip_size_kb": 0, 00:18:45.514 "state": "online", 00:18:45.514 "raid_level": "raid1", 00:18:45.514 "superblock": false, 00:18:45.514 "num_base_bdevs": 2, 00:18:45.514 "num_base_bdevs_discovered": 1, 00:18:45.514 "num_base_bdevs_operational": 1, 00:18:45.514 "base_bdevs_list": [ 00:18:45.514 { 00:18:45.514 "name": null, 00:18:45.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.514 "is_configured": false, 00:18:45.514 "data_offset": 0, 00:18:45.514 "data_size": 65536 00:18:45.514 }, 00:18:45.514 { 00:18:45.514 "name": "BaseBdev2", 00:18:45.514 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:45.514 "is_configured": true, 00:18:45.514 "data_offset": 0, 00:18:45.514 "data_size": 65536 00:18:45.514 } 00:18:45.514 ] 00:18:45.514 }' 00:18:45.514 14:20:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.514 14:20:37 -- common/autotest_common.sh@10 -- # set +x 00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.082 14:20:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.649 14:20:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:46.649 "name": "raid_bdev1", 00:18:46.649 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:46.649 "strip_size_kb": 0, 00:18:46.649 "state": "online", 00:18:46.649 "raid_level": "raid1", 00:18:46.649 "superblock": false, 00:18:46.649 "num_base_bdevs": 2, 00:18:46.649 "num_base_bdevs_discovered": 1, 00:18:46.649 "num_base_bdevs_operational": 1, 00:18:46.649 "base_bdevs_list": [ 00:18:46.649 { 00:18:46.649 "name": null, 00:18:46.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.649 "is_configured": false, 00:18:46.649 "data_offset": 0, 00:18:46.649 "data_size": 65536 00:18:46.649 }, 00:18:46.649 { 00:18:46.649 "name": "BaseBdev2", 00:18:46.649 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:46.649 "is_configured": true, 00:18:46.649 "data_offset": 0, 00:18:46.649 "data_size": 65536 00:18:46.649 } 00:18:46.649 ] 00:18:46.649 }' 00:18:46.649 14:20:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:46.649 14:20:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:46.649 14:20:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:46.649 14:20:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:46.649 14:20:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:46.909 [2024-11-18 14:20:38.756171] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:18:46.909 [2024-11-18 14:20:38.756223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.909 14:20:38 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:18:46.909 [2024-11-18 14:20:38.805939] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:46.909 [2024-11-18 14:20:38.807843] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:46.909 [2024-11-18 14:20:38.915448] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:46.909 [2024-11-18 14:20:38.915825] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:47.167 [2024-11-18 14:20:39.123849] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:47.167 [2024-11-18 14:20:39.123979] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:47.426 [2024-11-18 14:20:39.457968] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:47.684 [2024-11-18 14:20:39.570642] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@183 
-- # local raid_bdev_name=raid_bdev1 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.943 14:20:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.943 [2024-11-18 14:20:40.010304] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:47.943 [2024-11-18 14:20:40.010473] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:48.202 "name": "raid_bdev1", 00:18:48.202 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:48.202 "strip_size_kb": 0, 00:18:48.202 "state": "online", 00:18:48.202 "raid_level": "raid1", 00:18:48.202 "superblock": false, 00:18:48.202 "num_base_bdevs": 2, 00:18:48.202 "num_base_bdevs_discovered": 2, 00:18:48.202 "num_base_bdevs_operational": 2, 00:18:48.202 "process": { 00:18:48.202 "type": "rebuild", 00:18:48.202 "target": "spare", 00:18:48.202 "progress": { 00:18:48.202 "blocks": 16384, 00:18:48.202 "percent": 25 00:18:48.202 } 00:18:48.202 }, 00:18:48.202 "base_bdevs_list": [ 00:18:48.202 { 00:18:48.202 "name": "spare", 00:18:48.202 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:48.202 "is_configured": true, 00:18:48.202 "data_offset": 0, 00:18:48.202 "data_size": 65536 00:18:48.202 }, 00:18:48.202 { 00:18:48.202 "name": "BaseBdev2", 00:18:48.202 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:48.202 "is_configured": true, 00:18:48.202 "data_offset": 0, 00:18:48.202 "data_size": 65536 00:18:48.202 } 00:18:48.202 ] 00:18:48.202 }' 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@657 -- # local timeout=396 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.202 14:20:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.460 14:20:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:48.461 "name": "raid_bdev1", 00:18:48.461 "uuid": 
"0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:48.461 "strip_size_kb": 0, 00:18:48.461 "state": "online", 00:18:48.461 "raid_level": "raid1", 00:18:48.461 "superblock": false, 00:18:48.461 "num_base_bdevs": 2, 00:18:48.461 "num_base_bdevs_discovered": 2, 00:18:48.461 "num_base_bdevs_operational": 2, 00:18:48.461 "process": { 00:18:48.461 "type": "rebuild", 00:18:48.461 "target": "spare", 00:18:48.461 "progress": { 00:18:48.461 "blocks": 18432, 00:18:48.461 "percent": 28 00:18:48.461 } 00:18:48.461 }, 00:18:48.461 "base_bdevs_list": [ 00:18:48.461 { 00:18:48.461 "name": "spare", 00:18:48.461 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:48.461 "is_configured": true, 00:18:48.461 "data_offset": 0, 00:18:48.461 "data_size": 65536 00:18:48.461 }, 00:18:48.461 { 00:18:48.461 "name": "BaseBdev2", 00:18:48.461 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:48.461 "is_configured": true, 00:18:48.461 "data_offset": 0, 00:18:48.461 "data_size": 65536 00:18:48.461 } 00:18:48.461 ] 00:18:48.461 }' 00:18:48.461 14:20:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:48.461 14:20:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.461 [2024-11-18 14:20:40.388362] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:48.461 14:20:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:48.461 14:20:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.461 14:20:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:48.461 [2024-11-18 14:20:40.502198] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:48.461 [2024-11-18 14:20:40.502324] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:49.028 [2024-11-18 14:20:40.812162] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:49.028 [2024-11-18 14:20:40.925859] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:49.286 [2024-11-18 14:20:41.147204] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:49.546 [2024-11-18 14:20:41.374463] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.546 14:20:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.804 14:20:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:49.805 "name": "raid_bdev1", 00:18:49.805 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:49.805 "strip_size_kb": 0, 00:18:49.805 "state": "online", 00:18:49.805 "raid_level": "raid1", 
00:18:49.805 "superblock": false, 00:18:49.805 "num_base_bdevs": 2, 00:18:49.805 "num_base_bdevs_discovered": 2, 00:18:49.805 "num_base_bdevs_operational": 2, 00:18:49.805 "process": { 00:18:49.805 "type": "rebuild", 00:18:49.805 "target": "spare", 00:18:49.805 "progress": { 00:18:49.805 "blocks": 36864, 00:18:49.805 "percent": 56 00:18:49.805 } 00:18:49.805 }, 00:18:49.805 "base_bdevs_list": [ 00:18:49.805 { 00:18:49.805 "name": "spare", 00:18:49.805 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:49.805 "is_configured": true, 00:18:49.805 "data_offset": 0, 00:18:49.805 "data_size": 65536 00:18:49.805 }, 00:18:49.805 { 00:18:49.805 "name": "BaseBdev2", 00:18:49.805 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:49.805 "is_configured": true, 00:18:49.805 "data_offset": 0, 00:18:49.805 "data_size": 65536 00:18:49.805 } 00:18:49.805 ] 00:18:49.805 }' 00:18:49.805 14:20:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:49.805 14:20:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.805 14:20:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:49.805 14:20:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.805 14:20:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:49.805 [2024-11-18 14:20:41.854309] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:50.372 [2024-11-18 14:20:42.180631] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:50.372 [2024-11-18 14:20:42.180887] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:50.631 [2024-11-18 14:20:42.524764] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.890 14:20:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.149 14:20:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:51.149 "name": "raid_bdev1", 00:18:51.149 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:51.149 "strip_size_kb": 0, 00:18:51.149 "state": "online", 00:18:51.149 "raid_level": "raid1", 00:18:51.149 "superblock": false, 00:18:51.149 "num_base_bdevs": 2, 00:18:51.149 "num_base_bdevs_discovered": 2, 00:18:51.149 "num_base_bdevs_operational": 2, 00:18:51.149 "process": { 00:18:51.149 "type": "rebuild", 00:18:51.149 "target": "spare", 00:18:51.149 "progress": { 00:18:51.149 "blocks": 57344, 00:18:51.149 "percent": 87 00:18:51.149 } 00:18:51.149 }, 00:18:51.149 "base_bdevs_list": [ 00:18:51.149 { 00:18:51.149 "name": "spare", 00:18:51.149 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:51.149 "is_configured": true, 00:18:51.149 "data_offset": 0, 00:18:51.149 "data_size": 65536 00:18:51.149 }, 00:18:51.149 { 
00:18:51.149 "name": "BaseBdev2", 00:18:51.149 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:51.149 "is_configured": true, 00:18:51.149 "data_offset": 0, 00:18:51.149 "data_size": 65536 00:18:51.149 } 00:18:51.149 ] 00:18:51.149 }' 00:18:51.149 14:20:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:51.149 14:20:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.149 14:20:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:51.149 14:20:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.149 14:20:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:51.408 [2024-11-18 14:20:43.400091] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:51.666 [2024-11-18 14:20:43.500076] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:51.666 [2024-11-18 14:20:43.501126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.233 14:20:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:52.492 "name": "raid_bdev1", 00:18:52.492 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:52.492 "strip_size_kb": 0, 00:18:52.492 "state": "online", 00:18:52.492 "raid_level": "raid1", 00:18:52.492 "superblock": false, 00:18:52.492 "num_base_bdevs": 2, 00:18:52.492 "num_base_bdevs_discovered": 2, 00:18:52.492 "num_base_bdevs_operational": 2, 00:18:52.492 "base_bdevs_list": [ 00:18:52.492 { 00:18:52.492 "name": "spare", 00:18:52.492 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:52.492 "is_configured": true, 00:18:52.492 "data_offset": 0, 00:18:52.492 "data_size": 65536 00:18:52.492 }, 00:18:52.492 { 00:18:52.492 "name": "BaseBdev2", 00:18:52.492 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:52.492 "is_configured": true, 00:18:52.492 "data_offset": 0, 00:18:52.492 "data_size": 65536 00:18:52.492 } 00:18:52.492 ] 00:18:52.492 }' 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@660 -- # break 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.492 14:20:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.751 14:20:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:18:52.751 "name": "raid_bdev1", 00:18:52.751 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:52.751 "strip_size_kb": 0, 00:18:52.751 "state": "online", 00:18:52.751 "raid_level": "raid1", 00:18:52.751 "superblock": false, 00:18:52.751 "num_base_bdevs": 2, 00:18:52.751 "num_base_bdevs_discovered": 2, 00:18:52.751 "num_base_bdevs_operational": 2, 00:18:52.751 "base_bdevs_list": [ 00:18:52.751 { 00:18:52.751 "name": "spare", 00:18:52.751 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:52.751 "is_configured": true, 00:18:52.751 "data_offset": 0, 00:18:52.751 "data_size": 65536 00:18:52.751 }, 00:18:52.751 { 00:18:52.751 "name": "BaseBdev2", 00:18:52.751 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:52.751 "is_configured": true, 00:18:52.751 "data_offset": 0, 00:18:52.751 "data_size": 65536 00:18:52.751 } 00:18:52.751 ] 00:18:52.751 }' 00:18:52.751 14:20:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:18:52.751 14:20:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:52.751 14:20:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.059 14:20:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.059 14:20:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.059 "name": "raid_bdev1", 00:18:53.059 "uuid": "0c64216f-d8c9-4cbf-8c7b-badea5a3f69b", 00:18:53.059 "strip_size_kb": 0, 00:18:53.059 "state": "online", 00:18:53.059 "raid_level": "raid1", 00:18:53.059 "superblock": false, 00:18:53.059 "num_base_bdevs": 2, 00:18:53.059 "num_base_bdevs_discovered": 2, 00:18:53.059 "num_base_bdevs_operational": 2, 00:18:53.059 "base_bdevs_list": [ 00:18:53.059 { 00:18:53.059 "name": "spare", 00:18:53.059 "uuid": "5db06455-672c-5da5-9cef-77c64214245b", 00:18:53.059 "is_configured": true, 00:18:53.059 "data_offset": 0, 00:18:53.059 "data_size": 65536 00:18:53.059 }, 00:18:53.059 { 00:18:53.059 "name": "BaseBdev2", 00:18:53.059 "uuid": "1003ddb9-7a0d-40b4-acfb-10613a19baa3", 00:18:53.059 "is_configured": true, 00:18:53.059 "data_offset": 0, 00:18:53.059 "data_size": 65536 00:18:53.059 } 00:18:53.059 ] 00:18:53.059 }' 00:18:53.059 14:20:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.059 14:20:45 -- common/autotest_common.sh@10 -- # set +x 
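Between the progress dumps above (blocks climbing from 16384 through 57344 to completion) and this final online/raid1 state check, the test waits for the rebuild process to drop out of the bdev info. A sketch of that polling pattern under the same assumptions ($rpc shorthand, jq filters as traced); the real helper also bounds the wait with a timeout:

    # poll until the .process object disappears, at which point the
    # // "none" fallback fires and the rebuild is considered finished
    while [[ $($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"') == rebuild ]]; do
        sleep 1
    done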
00:18:53.626 14:20:45 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:53.885 [2024-11-18 14:20:45.927327] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:53.885 [2024-11-18 14:20:45.927381] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.144 00:18:54.144 Latency(us) 00:18:54.144 [2024-11-18T14:20:46.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.144 [2024-11-18T14:20:46.218Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:54.144 raid_bdev1 : 11.64 117.06 351.19 0.00 0.00 11873.23 279.27 113436.86 00:18:54.144 [2024-11-18T14:20:46.218Z] =================================================================================================================== 00:18:54.144 [2024-11-18T14:20:46.218Z] Total : 117.06 351.19 0.00 0.00 11873.23 279.27 113436.86 00:18:54.144 [2024-11-18 14:20:45.966730] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.144 [2024-11-18 14:20:45.966775] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.144 [2024-11-18 14:20:45.966855] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.144 [2024-11-18 14:20:45.966870] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:18:54.144 0 00:18:54.144 14:20:45 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.144 14:20:45 -- bdev/bdev_raid.sh@671 -- # jq length 00:18:54.404 14:20:46 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:18:54.404 14:20:46 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:18:54.404 14:20:46 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@12 -- # local i 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:18:54.404 /dev/nbd0 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:54.404 14:20:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:18:54.404 14:20:46 -- common/autotest_common.sh@867 -- # local i 00:18:54.404 14:20:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:54.404 14:20:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:54.404 14:20:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:18:54.404 14:20:46 -- common/autotest_common.sh@871 -- # break 00:18:54.404 14:20:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:54.404 14:20:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:54.404 14:20:46 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:54.404 1+0 records in 00:18:54.404 1+0 records out 00:18:54.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444152 s, 9.2 MB/s 00:18:54.404 14:20:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.404 14:20:46 -- common/autotest_common.sh@884 -- # size=4096 00:18:54.404 14:20:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.404 14:20:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:54.404 14:20:46 -- common/autotest_common.sh@887 -- # return 0 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.404 14:20:46 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:18:54.404 14:20:46 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:18:54.404 14:20:46 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@12 -- # local i 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.404 14:20:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:54.663 /dev/nbd1 00:18:54.663 14:20:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:54.663 14:20:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:54.663 14:20:46 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:18:54.663 14:20:46 -- common/autotest_common.sh@867 -- # local i 00:18:54.663 14:20:46 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:18:54.663 14:20:46 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:18:54.663 14:20:46 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:18:54.663 14:20:46 -- common/autotest_common.sh@871 -- # break 00:18:54.663 14:20:46 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:18:54.663 14:20:46 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:18:54.663 14:20:46 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:54.663 1+0 records in 00:18:54.663 1+0 records out 00:18:54.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281742 s, 14.5 MB/s 00:18:54.922 14:20:46 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.922 14:20:46 -- common/autotest_common.sh@884 -- # size=4096 00:18:54.922 14:20:46 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.922 14:20:46 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:18:54.922 14:20:46 -- common/autotest_common.sh@887 -- # return 0 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.922 14:20:46 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:54.922 
14:20:46 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@51 -- # local i 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.922 14:20:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@41 -- # break 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:18:55.180 14:20:47 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@51 -- # local i 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:55.180 14:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:55.181 14:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:55.181 14:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:55.181 14:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:55.181 14:20:47 -- bdev/nbd_common.sh@41 -- # break 00:18:55.181 14:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:18:55.181 14:20:47 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:18:55.181 14:20:47 -- bdev/bdev_raid.sh@709 -- # killprocess 133537 00:18:55.181 14:20:47 -- common/autotest_common.sh@936 -- # '[' -z 133537 ']' 00:18:55.181 14:20:47 -- common/autotest_common.sh@940 -- # kill -0 133537 00:18:55.181 14:20:47 -- common/autotest_common.sh@941 -- # uname 00:18:55.181 14:20:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:55.181 14:20:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133537 00:18:55.438 14:20:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:55.438 14:20:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:55.438 killing process with pid 133537 00:18:55.438 14:20:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133537' 00:18:55.438 14:20:47 -- common/autotest_common.sh@955 -- # kill 133537 00:18:55.438 Received shutdown signal, test time was about 12.939011 seconds 00:18:55.438 00:18:55.438 Latency(us) 00:18:55.438 [2024-11-18T14:20:47.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.438 [2024-11-18T14:20:47.512Z] 
=================================================================================================================== 00:18:55.438 [2024-11-18T14:20:47.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.438 [2024-11-18 14:20:47.258781] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:55.438 14:20:47 -- common/autotest_common.sh@960 -- # wait 133537 00:18:55.438 [2024-11-18 14:20:47.284462] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@711 -- # return 0 00:18:55.697 00:18:55.697 real 0m17.109s 00:18:55.697 user 0m26.823s 00:18:55.697 sys 0m1.753s 00:18:55.697 14:20:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:55.697 14:20:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.697 ************************************ 00:18:55.697 END TEST raid_rebuild_test_io 00:18:55.697 ************************************ 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:18:55.697 14:20:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:55.697 14:20:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.697 14:20:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.697 ************************************ 00:18:55.697 START TEST raid_rebuild_test_sb_io 00:18:55.697 ************************************ 00:18:55.697 14:20:47 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:55.697 14:20:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=134012 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134012 /var/tmp/spdk-raid.sock 00:18:55.698 14:20:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:55.698 
14:20:47 -- common/autotest_common.sh@829 -- # '[' -z 134012 ']' 00:18:55.698 14:20:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:55.698 14:20:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.698 14:20:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:55.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:55.698 14:20:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.698 14:20:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.698 [2024-11-18 14:20:47.702081] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:55.698 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:55.698 Zero copy mechanism will not be used. 00:18:55.698 [2024-11-18 14:20:47.702270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134012 ] 00:18:55.956 [2024-11-18 14:20:47.840943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.956 [2024-11-18 14:20:47.909933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.956 [2024-11-18 14:20:47.979520] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.892 14:20:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.892 14:20:48 -- common/autotest_common.sh@862 -- # return 0 00:18:56.892 14:20:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:56.892 14:20:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:56.892 14:20:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:56.892 BaseBdev1_malloc 00:18:56.892 14:20:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:57.151 [2024-11-18 14:20:49.091476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:57.151 [2024-11-18 14:20:49.091586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.151 [2024-11-18 14:20:49.091635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:57.151 [2024-11-18 14:20:49.091685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.151 [2024-11-18 14:20:49.094129] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.151 [2024-11-18 14:20:49.094187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:57.151 BaseBdev1 00:18:57.151 14:20:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:57.151 14:20:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:18:57.151 14:20:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:57.408 BaseBdev2_malloc 00:18:57.408 14:20:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:57.667 [2024-11-18 14:20:49.536900] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:57.667 [2024-11-18 14:20:49.536963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.667 [2024-11-18 14:20:49.536998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:57.667 [2024-11-18 14:20:49.537040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.667 [2024-11-18 14:20:49.539270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.667 [2024-11-18 14:20:49.539320] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:57.667 BaseBdev2 00:18:57.667 14:20:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:57.925 spare_malloc 00:18:57.925 14:20:49 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:57.925 spare_delay 00:18:57.925 14:20:49 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:58.189 [2024-11-18 14:20:50.165991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:58.189 [2024-11-18 14:20:50.166053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.189 [2024-11-18 14:20:50.166095] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:58.189 [2024-11-18 14:20:50.166137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.189 [2024-11-18 14:20:50.168985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.189 [2024-11-18 14:20:50.169043] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:58.189 spare 00:18:58.189 14:20:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:58.447 [2024-11-18 14:20:50.358121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:58.448 [2024-11-18 14:20:50.360137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.448 [2024-11-18 14:20:50.360351] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:18:58.448 [2024-11-18 14:20:50.360366] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:58.448 [2024-11-18 14:20:50.360504] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:18:58.448 [2024-11-18 14:20:50.360883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:18:58.448 [2024-11-18 14:20:50.360905] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:18:58.448 [2024-11-18 14:20:50.361051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:58.448 14:20:50 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.448 14:20:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.707 14:20:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.707 "name": "raid_bdev1", 00:18:58.707 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:18:58.707 "strip_size_kb": 0, 00:18:58.707 "state": "online", 00:18:58.707 "raid_level": "raid1", 00:18:58.707 "superblock": true, 00:18:58.707 "num_base_bdevs": 2, 00:18:58.707 "num_base_bdevs_discovered": 2, 00:18:58.707 "num_base_bdevs_operational": 2, 00:18:58.707 "base_bdevs_list": [ 00:18:58.707 { 00:18:58.707 "name": "BaseBdev1", 00:18:58.707 "uuid": "ccc4439f-6771-5f22-931c-4ead67d2dcfa", 00:18:58.707 "is_configured": true, 00:18:58.707 "data_offset": 2048, 00:18:58.707 "data_size": 63488 00:18:58.707 }, 00:18:58.707 { 00:18:58.707 "name": "BaseBdev2", 00:18:58.707 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:18:58.707 "is_configured": true, 00:18:58.707 "data_offset": 2048, 00:18:58.707 "data_size": 63488 00:18:58.707 } 00:18:58.707 ] 00:18:58.707 }' 00:18:58.707 14:20:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.707 14:20:50 -- common/autotest_common.sh@10 -- # set +x 00:18:59.275 14:20:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:59.275 14:20:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:18:59.534 [2024-11-18 14:20:51.386339] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.534 14:20:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:18:59.534 14:20:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.534 14:20:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:59.793 14:20:51 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:18:59.793 14:20:51 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:18:59.793 14:20:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:59.793 14:20:51 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:59.793 [2024-11-18 14:20:51.769419] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:18:59.793 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:59.793 Zero copy mechanism will not be used. 00:18:59.793 Running I/O for 60 seconds... 
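What follows exercises hot removal under load: with bdevperf driving random I/O at the raid bdev, the test pulls BaseBdev1 out and expects the raid1 array to stay online in degraded mode. A condensed sketch of that step, reusing the socket path and bdev names from this trace (the jq check approximates verify_raid_bdev_state rather than reproducing it):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev1   # hot-remove while I/O keeps running
    # raid1 must survive losing one mirror: still online, one base bdev discovered
    $rpc bdev_raid_get_bdevs all | jq -e '.[]
        | select(.name == "raid_bdev1")
        | .state == "online" and .num_base_bdevs_discovered == 1'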
00:19:00.052 [2024-11-18 14:20:51.893524] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:00.052 [2024-11-18 14:20:51.899476] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.052 14:20:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.310 14:20:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.310 "name": "raid_bdev1", 00:19:00.310 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:00.310 "strip_size_kb": 0, 00:19:00.310 "state": "online", 00:19:00.310 "raid_level": "raid1", 00:19:00.310 "superblock": true, 00:19:00.310 "num_base_bdevs": 2, 00:19:00.310 "num_base_bdevs_discovered": 1, 00:19:00.310 "num_base_bdevs_operational": 1, 00:19:00.310 "base_bdevs_list": [ 00:19:00.310 { 00:19:00.310 "name": null, 00:19:00.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.310 "is_configured": false, 00:19:00.310 "data_offset": 2048, 00:19:00.310 "data_size": 63488 00:19:00.310 }, 00:19:00.310 { 00:19:00.310 "name": "BaseBdev2", 00:19:00.310 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:00.310 "is_configured": true, 00:19:00.310 "data_offset": 2048, 00:19:00.310 "data_size": 63488 00:19:00.310 } 00:19:00.310 ] 00:19:00.310 }' 00:19:00.310 14:20:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.310 14:20:52 -- common/autotest_common.sh@10 -- # set +x 00:19:00.878 14:20:52 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.136 [2024-11-18 14:20:52.968393] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:01.136 [2024-11-18 14:20:52.968447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.136 14:20:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:01.136 [2024-11-18 14:20:53.008151] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:01.136 [2024-11-18 14:20:53.010255] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:01.136 [2024-11-18 14:20:53.135444] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:01.136 [2024-11-18 14:20:53.135842] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:01.396 [2024-11-18 14:20:53.370446] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:19:01.964 [2024-11-18 14:20:53.828612] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:01.964 [2024-11-18 14:20:53.828779] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.964 14:20:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.223 14:20:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:02.223 "name": "raid_bdev1", 00:19:02.223 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:02.223 "strip_size_kb": 0, 00:19:02.223 "state": "online", 00:19:02.223 "raid_level": "raid1", 00:19:02.223 "superblock": true, 00:19:02.223 "num_base_bdevs": 2, 00:19:02.223 "num_base_bdevs_discovered": 2, 00:19:02.223 "num_base_bdevs_operational": 2, 00:19:02.223 "process": { 00:19:02.223 "type": "rebuild", 00:19:02.223 "target": "spare", 00:19:02.223 "progress": { 00:19:02.223 "blocks": 14336, 00:19:02.223 "percent": 22 00:19:02.223 } 00:19:02.223 }, 00:19:02.223 "base_bdevs_list": [ 00:19:02.223 { 00:19:02.223 "name": "spare", 00:19:02.223 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:02.223 "is_configured": true, 00:19:02.223 "data_offset": 2048, 00:19:02.223 "data_size": 63488 00:19:02.223 }, 00:19:02.223 { 00:19:02.223 "name": "BaseBdev2", 00:19:02.223 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:02.223 "is_configured": true, 00:19:02.223 "data_offset": 2048, 00:19:02.223 "data_size": 63488 00:19:02.223 } 00:19:02.223 ] 00:19:02.223 }' 00:19:02.223 14:20:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:02.223 [2024-11-18 14:20:54.269911] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:02.482 14:20:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.482 14:20:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:02.482 14:20:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.482 14:20:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:02.482 [2024-11-18 14:20:54.526571] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.741 [2024-11-18 14:20:54.593111] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:02.741 [2024-11-18 14:20:54.698907] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:02.741 [2024-11-18 14:20:54.706233] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.741 [2024-11-18 14:20:54.714538] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 
1 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.741 14:20:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.000 14:20:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.000 "name": "raid_bdev1", 00:19:03.000 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:03.000 "strip_size_kb": 0, 00:19:03.000 "state": "online", 00:19:03.000 "raid_level": "raid1", 00:19:03.000 "superblock": true, 00:19:03.000 "num_base_bdevs": 2, 00:19:03.000 "num_base_bdevs_discovered": 1, 00:19:03.000 "num_base_bdevs_operational": 1, 00:19:03.000 "base_bdevs_list": [ 00:19:03.000 { 00:19:03.000 "name": null, 00:19:03.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.000 "is_configured": false, 00:19:03.000 "data_offset": 2048, 00:19:03.000 "data_size": 63488 00:19:03.000 }, 00:19:03.000 { 00:19:03.000 "name": "BaseBdev2", 00:19:03.000 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:03.000 "is_configured": true, 00:19:03.000 "data_offset": 2048, 00:19:03.000 "data_size": 63488 00:19:03.000 } 00:19:03.000 ] 00:19:03.000 }' 00:19:03.000 14:20:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.000 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.569 14:20:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.828 14:20:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:03.828 "name": "raid_bdev1", 00:19:03.828 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:03.828 "strip_size_kb": 0, 00:19:03.828 "state": "online", 00:19:03.828 "raid_level": "raid1", 00:19:03.828 "superblock": true, 00:19:03.828 "num_base_bdevs": 2, 00:19:03.828 "num_base_bdevs_discovered": 1, 00:19:03.828 "num_base_bdevs_operational": 1, 00:19:03.828 "base_bdevs_list": [ 00:19:03.828 { 00:19:03.828 "name": null, 00:19:03.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.828 "is_configured": false, 00:19:03.828 "data_offset": 2048, 00:19:03.828 "data_size": 63488 00:19:03.828 }, 00:19:03.828 { 00:19:03.828 "name": "BaseBdev2", 00:19:03.828 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:03.828 "is_configured": true, 00:19:03.828 "data_offset": 2048, 00:19:03.828 "data_size": 63488 
00:19:03.828 } 00:19:03.828 ] 00:19:03.828 }' 00:19:03.828 14:20:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:04.089 14:20:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:04.089 14:20:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:04.089 14:20:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:04.089 14:20:55 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.089 [2024-11-18 14:20:56.133686] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:04.089 [2024-11-18 14:20:56.133729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.089 14:20:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:04.349 [2024-11-18 14:20:56.165634] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:19:04.349 [2024-11-18 14:20:56.167713] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.349 [2024-11-18 14:20:56.280933] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:04.349 [2024-11-18 14:20:56.281206] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:04.349 [2024-11-18 14:20:56.395798] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:04.349 [2024-11-18 14:20:56.395938] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:04.917 [2024-11-18 14:20:56.717885] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:19:04.917 [2024-11-18 14:20:56.844408] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:05.176 [2024-11-18 14:20:57.090646] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:05.176 [2024-11-18 14:20:57.090877] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.176 14:20:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.434 [2024-11-18 14:20:57.305609] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:05.434 14:20:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:05.434 "name": "raid_bdev1", 00:19:05.434 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:05.434 "strip_size_kb": 0, 00:19:05.434 "state": "online", 00:19:05.434 "raid_level": "raid1", 00:19:05.434 "superblock": true, 00:19:05.434 "num_base_bdevs": 2, 00:19:05.434 
"num_base_bdevs_discovered": 2, 00:19:05.434 "num_base_bdevs_operational": 2, 00:19:05.434 "process": { 00:19:05.434 "type": "rebuild", 00:19:05.434 "target": "spare", 00:19:05.434 "progress": { 00:19:05.434 "blocks": 16384, 00:19:05.434 "percent": 25 00:19:05.434 } 00:19:05.434 }, 00:19:05.434 "base_bdevs_list": [ 00:19:05.434 { 00:19:05.434 "name": "spare", 00:19:05.434 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:05.434 "is_configured": true, 00:19:05.434 "data_offset": 2048, 00:19:05.435 "data_size": 63488 00:19:05.435 }, 00:19:05.435 { 00:19:05.435 "name": "BaseBdev2", 00:19:05.435 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:05.435 "is_configured": true, 00:19:05.435 "data_offset": 2048, 00:19:05.435 "data_size": 63488 00:19:05.435 } 00:19:05.435 ] 00:19:05.435 }' 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:05.435 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@657 -- # local timeout=413 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:05.435 14:20:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:05.693 14:20:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.693 14:20:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.693 [2024-11-18 14:20:57.632890] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:19:05.693 14:20:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:05.693 "name": "raid_bdev1", 00:19:05.693 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:05.693 "strip_size_kb": 0, 00:19:05.693 "state": "online", 00:19:05.693 "raid_level": "raid1", 00:19:05.693 "superblock": true, 00:19:05.693 "num_base_bdevs": 2, 00:19:05.693 "num_base_bdevs_discovered": 2, 00:19:05.693 "num_base_bdevs_operational": 2, 00:19:05.693 "process": { 00:19:05.693 "type": "rebuild", 00:19:05.693 "target": "spare", 00:19:05.693 "progress": { 00:19:05.693 "blocks": 20480, 00:19:05.693 "percent": 32 00:19:05.693 } 00:19:05.693 }, 00:19:05.693 "base_bdevs_list": [ 00:19:05.693 { 00:19:05.693 "name": "spare", 00:19:05.693 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:05.693 "is_configured": true, 00:19:05.694 "data_offset": 2048, 00:19:05.694 "data_size": 63488 00:19:05.694 }, 00:19:05.694 { 00:19:05.694 "name": "BaseBdev2", 00:19:05.694 "uuid": 
"015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:05.694 "is_configured": true, 00:19:05.694 "data_offset": 2048, 00:19:05.694 "data_size": 63488 00:19:05.694 } 00:19:05.694 ] 00:19:05.694 }' 00:19:05.694 14:20:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:05.953 14:20:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.953 14:20:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:05.953 [2024-11-18 14:20:57.840718] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:05.953 14:20:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.953 [2024-11-18 14:20:57.840858] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:05.953 14:20:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:06.212 [2024-11-18 14:20:58.162098] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:06.212 [2024-11-18 14:20:58.162283] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:06.779 [2024-11-18 14:20:58.615627] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.779 14:20:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.038 14:20:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:07.038 "name": "raid_bdev1", 00:19:07.038 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:07.038 "strip_size_kb": 0, 00:19:07.038 "state": "online", 00:19:07.038 "raid_level": "raid1", 00:19:07.038 "superblock": true, 00:19:07.038 "num_base_bdevs": 2, 00:19:07.038 "num_base_bdevs_discovered": 2, 00:19:07.038 "num_base_bdevs_operational": 2, 00:19:07.038 "process": { 00:19:07.038 "type": "rebuild", 00:19:07.038 "target": "spare", 00:19:07.038 "progress": { 00:19:07.038 "blocks": 43008, 00:19:07.038 "percent": 67 00:19:07.038 } 00:19:07.038 }, 00:19:07.039 "base_bdevs_list": [ 00:19:07.039 { 00:19:07.039 "name": "spare", 00:19:07.039 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:07.039 "is_configured": true, 00:19:07.039 "data_offset": 2048, 00:19:07.039 "data_size": 63488 00:19:07.039 }, 00:19:07.039 { 00:19:07.039 "name": "BaseBdev2", 00:19:07.039 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:07.039 "is_configured": true, 00:19:07.039 "data_offset": 2048, 00:19:07.039 "data_size": 63488 00:19:07.039 } 00:19:07.039 ] 00:19:07.039 }' 00:19:07.039 14:20:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:07.296 14:20:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.296 14:20:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:07.296 14:20:59 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.296 14:20:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:07.555 [2024-11-18 14:20:59.478103] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:19:07.824 [2024-11-18 14:20:59.804727] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.468 [2024-11-18 14:21:00.233002] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:08.468 [2024-11-18 14:21:00.333043] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:08.468 [2024-11-18 14:21:00.334854] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:08.468 "name": "raid_bdev1", 00:19:08.468 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:08.468 "strip_size_kb": 0, 00:19:08.468 "state": "online", 00:19:08.468 "raid_level": "raid1", 00:19:08.468 "superblock": true, 00:19:08.468 "num_base_bdevs": 2, 00:19:08.468 "num_base_bdevs_discovered": 2, 00:19:08.468 "num_base_bdevs_operational": 2, 00:19:08.468 "base_bdevs_list": [ 00:19:08.468 { 00:19:08.468 "name": "spare", 00:19:08.468 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:08.468 "is_configured": true, 00:19:08.468 "data_offset": 2048, 00:19:08.468 "data_size": 63488 00:19:08.468 }, 00:19:08.468 { 00:19:08.468 "name": "BaseBdev2", 00:19:08.468 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:08.468 "is_configured": true, 00:19:08.468 "data_offset": 2048, 00:19:08.468 "data_size": 63488 00:19:08.468 } 00:19:08.468 ] 00:19:08.468 }' 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:08.468 14:21:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@660 -- # break 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.731 14:21:00 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:08.731 "name": "raid_bdev1", 00:19:08.731 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:08.731 "strip_size_kb": 0, 00:19:08.731 "state": "online", 00:19:08.731 "raid_level": "raid1", 00:19:08.731 "superblock": true, 00:19:08.731 "num_base_bdevs": 2, 00:19:08.731 "num_base_bdevs_discovered": 2, 00:19:08.731 "num_base_bdevs_operational": 2, 00:19:08.731 "base_bdevs_list": [ 00:19:08.731 { 00:19:08.731 "name": "spare", 00:19:08.731 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:08.731 "is_configured": true, 00:19:08.731 "data_offset": 2048, 00:19:08.731 "data_size": 63488 00:19:08.731 }, 00:19:08.731 { 00:19:08.731 "name": "BaseBdev2", 00:19:08.731 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:08.731 "is_configured": true, 00:19:08.731 "data_offset": 2048, 00:19:08.731 "data_size": 63488 00:19:08.731 } 00:19:08.731 ] 00:19:08.731 }' 00:19:08.731 14:21:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.990 14:21:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.990 14:21:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.990 "name": "raid_bdev1", 00:19:08.990 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:08.990 "strip_size_kb": 0, 00:19:08.990 "state": "online", 00:19:08.990 "raid_level": "raid1", 00:19:08.990 "superblock": true, 00:19:08.990 "num_base_bdevs": 2, 00:19:08.990 "num_base_bdevs_discovered": 2, 00:19:08.990 "num_base_bdevs_operational": 2, 00:19:08.990 "base_bdevs_list": [ 00:19:08.990 { 00:19:08.990 "name": "spare", 00:19:08.990 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:08.990 "is_configured": true, 00:19:08.990 "data_offset": 2048, 00:19:08.990 "data_size": 63488 00:19:08.990 }, 00:19:08.990 { 00:19:08.990 "name": "BaseBdev2", 00:19:08.990 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:08.990 "is_configured": true, 00:19:08.990 "data_offset": 2048, 00:19:08.990 "data_size": 63488 00:19:08.990 } 00:19:08.990 ] 00:19:08.990 }' 00:19:08.990 14:21:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.990 14:21:01 -- common/autotest_common.sh@10 -- # set +x 00:19:09.928 14:21:01 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:09.928 [2024-11-18 14:21:01.915422] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:09.928 [2024-11-18 14:21:01.915476] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.928 00:19:09.928 Latency(us) 00:19:09.928 [2024-11-18T14:21:02.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.928 [2024-11-18T14:21:02.002Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:09.928 raid_bdev1 : 10.19 129.52 388.55 0.00 0.00 10378.36 281.13 114866.73 00:19:09.928 [2024-11-18T14:21:02.002Z] =================================================================================================================== 00:19:09.928 [2024-11-18T14:21:02.002Z] Total : 129.52 388.55 0.00 0.00 10378.36 281.13 114866.73 00:19:09.928 [2024-11-18 14:21:01.966797] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.928 [2024-11-18 14:21:01.966850] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.928 [2024-11-18 14:21:01.966950] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:09.928 [2024-11-18 14:21:01.966974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:19:09.928 0 00:19:09.928 14:21:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.928 14:21:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:10.187 14:21:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:10.187 14:21:02 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:19:10.187 14:21:02 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@12 -- # local i 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.187 14:21:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:10.446 /dev/nbd0 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.446 14:21:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:10.446 14:21:02 -- common/autotest_common.sh@867 -- # local i 00:19:10.446 14:21:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:10.446 14:21:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:10.446 14:21:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:10.446 14:21:02 -- common/autotest_common.sh@871 -- # break 00:19:10.446 14:21:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:10.446 14:21:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:10.446 14:21:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.446 1+0 records in 00:19:10.446 1+0 records out 00:19:10.446 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000509503 s, 8.0 MB/s 00:19:10.446 14:21:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.446 14:21:02 -- common/autotest_common.sh@884 -- # size=4096 00:19:10.446 14:21:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.446 14:21:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:10.446 14:21:02 -- common/autotest_common.sh@887 -- # return 0 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.446 14:21:02 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:19:10.446 14:21:02 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:19:10.446 14:21:02 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@12 -- # local i 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.446 14:21:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:10.705 /dev/nbd1 00:19:10.964 14:21:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:10.964 14:21:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:10.964 14:21:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:10.964 14:21:02 -- common/autotest_common.sh@867 -- # local i 00:19:10.964 14:21:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:10.964 14:21:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:10.964 14:21:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:10.964 14:21:02 -- common/autotest_common.sh@871 -- # break 00:19:10.964 14:21:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:10.964 14:21:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:10.964 14:21:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.964 1+0 records in 00:19:10.964 1+0 records out 00:19:10.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499799 s, 8.2 MB/s 00:19:10.964 14:21:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.964 14:21:02 -- common/autotest_common.sh@884 -- # size=4096 00:19:10.964 14:21:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.964 14:21:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:10.964 14:21:02 -- common/autotest_common.sh@887 -- # return 0 00:19:10.964 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.964 14:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.964 14:21:02 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:10.964 14:21:02 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:10.964 14:21:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:10.965 14:21:02 -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:10.965 14:21:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.965 14:21:02 -- bdev/nbd_common.sh@51 -- # local i 00:19:10.965 14:21:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.965 14:21:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:11.223 14:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:11.223 14:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:11.223 14:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:11.223 14:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.223 14:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@41 -- # break 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.224 14:21:03 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@51 -- # local i 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.224 14:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@41 -- # break 00:19:11.482 14:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.482 14:21:03 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:11.482 14:21:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:11.482 14:21:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:11.482 14:21:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:11.741 14:21:03 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.741 [2024-11-18 14:21:03.808555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:11.741 [2024-11-18 14:21:03.808651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.741 [2024-11-18 14:21:03.808686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:11.741 [2024-11-18 14:21:03.808715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.741 [2024-11-18 14:21:03.811009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.741 [2024-11-18 14:21:03.811079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.741 [2024-11-18 14:21:03.811168] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
BaseBdev1 00:19:11.741 [2024-11-18 14:21:03.811235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.741 BaseBdev1 00:19:12.000 14:21:03 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:12.000 14:21:03 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:12.000 14:21:03 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:12.000 14:21:04 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:12.259 [2024-11-18 14:21:04.248671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:12.259 [2024-11-18 14:21:04.248746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.259 [2024-11-18 14:21:04.248776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:12.259 [2024-11-18 14:21:04.248803] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.259 [2024-11-18 14:21:04.249128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.259 [2024-11-18 14:21:04.249190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:12.259 [2024-11-18 14:21:04.249256] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:12.259 [2024-11-18 14:21:04.249270] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:12.259 [2024-11-18 14:21:04.249277] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:12.259 [2024-11-18 14:21:04.249304] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring 00:19:12.259 [2024-11-18 14:21:04.249345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.259 BaseBdev2 00:19:12.259 14:21:04 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:12.519 14:21:04 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:12.778 [2024-11-18 14:21:04.604758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:12.778 [2024-11-18 14:21:04.604812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.778 [2024-11-18 14:21:04.604854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:12.778 [2024-11-18 14:21:04.604875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.778 [2024-11-18 14:21:04.605243] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.778 [2024-11-18 14:21:04.605296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:12.778 [2024-11-18 14:21:04.605360] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:12.778 [2024-11-18 14:21:04.605412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:12.778 spare 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
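The three bdev_passthru_delete / bdev_passthru_create pairs above are how the harness forces re-examination: deleting a passthru releases its claim on the underlying malloc bdev, and re-creating it re-runs the examine path, which finds the raid superblock (raid_bdev_examine_load_sb_cb above) and re-assembles raid_bdev1 from BaseBdev1, BaseBdev2 and spare. A minimal sketch of that re-assemble-and-verify step, condensed from the traced script rather than quoted from it, assuming the same RPC socket and bdev names used above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Re-register the passthru; examine re-discovers the raid superblock on it
  $rpc bdev_passthru_delete spare
  $rpc bdev_passthru_create -b spare_delay -p spare
  # The array should come back online once enough base bdevs are claimed
  $rpc bdev_raid_get_bdevs all \
      | jq -e '.[] | select(.name == "raid_bdev1") | .state == "online"'

jq -e turns the boolean into an exit status, so a missing or still-configuring array fails the pipeline, which is what verify_raid_bdev_state below relies on.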
00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.778 [2024-11-18 14:21:04.705540] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:19:12.778 [2024-11-18 14:21:04.705561] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:12.778 [2024-11-18 14:21:04.705676] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:19:12.778 [2024-11-18 14:21:04.706045] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:19:12.778 [2024-11-18 14:21:04.706066] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:19:12.778 [2024-11-18 14:21:04.706167] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.778 14:21:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.778 "name": "raid_bdev1", 00:19:12.778 "uuid": "e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:12.778 "strip_size_kb": 0, 00:19:12.778 "state": "online", 00:19:12.778 "raid_level": "raid1", 00:19:12.778 "superblock": true, 00:19:12.778 "num_base_bdevs": 2, 00:19:12.778 "num_base_bdevs_discovered": 2, 00:19:12.778 "num_base_bdevs_operational": 2, 00:19:12.778 "base_bdevs_list": [ 00:19:12.778 { 00:19:12.779 "name": "spare", 00:19:12.779 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:12.779 "is_configured": true, 00:19:12.779 "data_offset": 2048, 00:19:12.779 "data_size": 63488 00:19:12.779 }, 00:19:12.779 { 00:19:12.779 "name": "BaseBdev2", 00:19:12.779 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:12.779 "is_configured": true, 00:19:12.779 "data_offset": 2048, 00:19:12.779 "data_size": 63488 00:19:12.779 } 00:19:12.779 ] 00:19:12.779 }' 00:19:12.779 14:21:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.779 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:13.715 "name": "raid_bdev1", 00:19:13.715 "uuid": 
"e618a6d0-454a-47b8-b73b-b19ff2f2a6f5", 00:19:13.715 "strip_size_kb": 0, 00:19:13.715 "state": "online", 00:19:13.715 "raid_level": "raid1", 00:19:13.715 "superblock": true, 00:19:13.715 "num_base_bdevs": 2, 00:19:13.715 "num_base_bdevs_discovered": 2, 00:19:13.715 "num_base_bdevs_operational": 2, 00:19:13.715 "base_bdevs_list": [ 00:19:13.715 { 00:19:13.715 "name": "spare", 00:19:13.715 "uuid": "d6cda801-3893-59c8-a655-cf16f14e5dd9", 00:19:13.715 "is_configured": true, 00:19:13.715 "data_offset": 2048, 00:19:13.715 "data_size": 63488 00:19:13.715 }, 00:19:13.715 { 00:19:13.715 "name": "BaseBdev2", 00:19:13.715 "uuid": "015a4956-2dc7-524e-b82f-91fc41e5562f", 00:19:13.715 "is_configured": true, 00:19:13.715 "data_offset": 2048, 00:19:13.715 "data_size": 63488 00:19:13.715 } 00:19:13.715 ] 00:19:13.715 }' 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:13.715 14:21:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:13.973 14:21:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:13.973 14:21:05 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.973 14:21:05 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:14.231 14:21:06 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.231 14:21:06 -- bdev/bdev_raid.sh@709 -- # killprocess 134012 00:19:14.231 14:21:06 -- common/autotest_common.sh@936 -- # '[' -z 134012 ']' 00:19:14.231 14:21:06 -- common/autotest_common.sh@940 -- # kill -0 134012 00:19:14.231 14:21:06 -- common/autotest_common.sh@941 -- # uname 00:19:14.231 14:21:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:14.231 14:21:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134012 00:19:14.231 killing process with pid 134012 00:19:14.231 Received shutdown signal, test time was about 14.309889 seconds 00:19:14.231 00:19:14.231 Latency(us) 00:19:14.231 [2024-11-18T14:21:06.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.231 [2024-11-18T14:21:06.305Z] =================================================================================================================== 00:19:14.231 [2024-11-18T14:21:06.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.231 14:21:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:14.231 14:21:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:14.231 14:21:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134012' 00:19:14.231 14:21:06 -- common/autotest_common.sh@955 -- # kill 134012 00:19:14.231 [2024-11-18 14:21:06.081497] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.231 14:21:06 -- common/autotest_common.sh@960 -- # wait 134012 00:19:14.231 [2024-11-18 14:21:06.081562] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.231 [2024-11-18 14:21:06.081622] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.231 [2024-11-18 14:21:06.081634] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:19:14.231 [2024-11-18 14:21:06.110092] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.489 ************************************ 00:19:14.489 END TEST 
raid_rebuild_test_sb_io 00:19:14.489 ************************************ 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:14.489 00:19:14.489 real 0m18.768s 00:19:14.489 user 0m30.990s 00:19:14.489 sys 0m1.958s 00:19:14.489 14:21:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:14.489 14:21:06 -- common/autotest_common.sh@10 -- # set +x 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:19:14.489 14:21:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:14.489 14:21:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:14.489 14:21:06 -- common/autotest_common.sh@10 -- # set +x 00:19:14.489 ************************************ 00:19:14.489 START TEST raid_rebuild_test 00:19:14.489 ************************************ 00:19:14.489 14:21:06 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@544 -- # raid_pid=134546 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134546 /var/tmp/spdk-raid.sock 00:19:14.489 14:21:06 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:14.489 14:21:06 -- common/autotest_common.sh@829 -- # '[' -z 134546 ']' 00:19:14.489 14:21:06 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:19:14.489 14:21:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.489 14:21:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:14.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:14.489 14:21:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.489 14:21:06 -- common/autotest_common.sh@10 -- # set +x 00:19:14.489 [2024-11-18 14:21:06.527337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:14.489 [2024-11-18 14:21:06.527507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134546 ] 00:19:14.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:14.489 Zero copy mechanism will not be used. 00:19:14.748 [2024-11-18 14:21:06.665093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.748 [2024-11-18 14:21:06.737793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.748 [2024-11-18 14:21:06.807332] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.684 14:21:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.684 14:21:07 -- common/autotest_common.sh@862 -- # return 0 00:19:15.684 14:21:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:15.684 14:21:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:15.684 14:21:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:15.684 BaseBdev1 00:19:15.684 14:21:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:15.684 14:21:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:15.684 14:21:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:15.943 BaseBdev2 00:19:15.943 14:21:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:15.943 14:21:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:15.943 14:21:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:16.202 BaseBdev3 00:19:16.202 14:21:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:16.202 14:21:08 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:16.202 14:21:08 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:16.460 BaseBdev4 00:19:16.460 14:21:08 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:16.719 spare_malloc 00:19:16.720 14:21:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:16.979 spare_delay 00:19:16.979 14:21:08 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:16.979 [2024-11-18 14:21:08.975375] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:16.979 [2024-11-18 14:21:08.975496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.979 [2024-11-18 14:21:08.975539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:16.979 [2024-11-18 14:21:08.975587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.979 [2024-11-18 14:21:08.978015] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.979 [2024-11-18 14:21:08.978077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:16.979 spare 00:19:16.979 14:21:08 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:17.237 [2024-11-18 14:21:09.219484] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.237 [2024-11-18 14:21:09.221463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.237 [2024-11-18 14:21:09.221521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:17.237 [2024-11-18 14:21:09.221557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:17.237 [2024-11-18 14:21:09.221636] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:19:17.237 [2024-11-18 14:21:09.221648] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:17.237 [2024-11-18 14:21:09.221798] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:17.237 [2024-11-18 14:21:09.222170] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:19:17.237 [2024-11-18 14:21:09.222191] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:19:17.237 [2024-11-18 14:21:09.222387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.237 14:21:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.496 14:21:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.496 "name": "raid_bdev1", 00:19:17.496 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:17.496 "strip_size_kb": 0, 00:19:17.496 "state": "online", 00:19:17.496 "raid_level": "raid1", 00:19:17.496 "superblock": false, 00:19:17.496 "num_base_bdevs": 4, 
00:19:17.496 "num_base_bdevs_discovered": 4, 00:19:17.496 "num_base_bdevs_operational": 4, 00:19:17.496 "base_bdevs_list": [ 00:19:17.496 { 00:19:17.496 "name": "BaseBdev1", 00:19:17.496 "uuid": "850d8e9b-9c27-4666-aae5-552833363881", 00:19:17.496 "is_configured": true, 00:19:17.496 "data_offset": 0, 00:19:17.496 "data_size": 65536 00:19:17.496 }, 00:19:17.496 { 00:19:17.496 "name": "BaseBdev2", 00:19:17.496 "uuid": "f8998d81-1789-4908-b3d3-df2c981e8025", 00:19:17.496 "is_configured": true, 00:19:17.496 "data_offset": 0, 00:19:17.496 "data_size": 65536 00:19:17.496 }, 00:19:17.496 { 00:19:17.496 "name": "BaseBdev3", 00:19:17.496 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:17.496 "is_configured": true, 00:19:17.496 "data_offset": 0, 00:19:17.496 "data_size": 65536 00:19:17.496 }, 00:19:17.496 { 00:19:17.496 "name": "BaseBdev4", 00:19:17.496 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:17.496 "is_configured": true, 00:19:17.496 "data_offset": 0, 00:19:17.496 "data_size": 65536 00:19:17.496 } 00:19:17.496 ] 00:19:17.496 }' 00:19:17.496 14:21:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.496 14:21:09 -- common/autotest_common.sh@10 -- # set +x 00:19:18.063 14:21:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:18.063 14:21:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:18.323 [2024-11-18 14:21:10.151752] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:18.323 14:21:10 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@12 -- # local i 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:18.323 14:21:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:18.582 [2024-11-18 14:21:10.571691] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:18.582 /dev/nbd0 00:19:18.582 14:21:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:18.582 14:21:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:18.582 14:21:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:18.582 14:21:10 -- common/autotest_common.sh@867 -- # local i 00:19:18.582 14:21:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:18.582 14:21:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:18.582 14:21:10 -- common/autotest_common.sh@870 -- # grep -q -w 
nbd0 /proc/partitions 00:19:18.582 14:21:10 -- common/autotest_common.sh@871 -- # break 00:19:18.582 14:21:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:18.582 14:21:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:18.582 14:21:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.582 1+0 records in 00:19:18.582 1+0 records out 00:19:18.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296997 s, 13.8 MB/s 00:19:18.582 14:21:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.582 14:21:10 -- common/autotest_common.sh@884 -- # size=4096 00:19:18.582 14:21:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.582 14:21:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:18.582 14:21:10 -- common/autotest_common.sh@887 -- # return 0 00:19:18.582 14:21:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:18.582 14:21:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:18.582 14:21:10 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:18.582 14:21:10 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:18.582 14:21:10 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:23.852 65536+0 records in 00:19:23.852 65536+0 records out 00:19:23.852 33554432 bytes (34 MB, 32 MiB) copied, 5.09328 s, 6.6 MB/s 00:19:23.852 14:21:15 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:23.852 14:21:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:23.852 14:21:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:23.852 14:21:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:23.852 14:21:15 -- bdev/nbd_common.sh@51 -- # local i 00:19:23.852 14:21:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.852 14:21:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:24.110 [2024-11-18 14:21:15.955991] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@41 -- # break 00:19:24.110 14:21:15 -- bdev/nbd_common.sh@45 -- # return 0 00:19:24.110 14:21:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:24.110 [2024-11-18 14:21:16.143539] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
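verify_raid_bdev_state at this point asserts that removing BaseBdev1 merely degrades the raid1 array instead of taking it offline: "state" must stay "online" while num_base_bdevs_discovered and num_base_bdevs_operational drop from 4 to 3, and the vacated slot is reported as an unconfigured all-zero entry. A minimal sketch of those assertions, paraphrasing the helper against the bdev_raid_get_bdevs JSON shown below rather than reproducing its exact body:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state'      <<<"$info") == online ]] || exit 1
  [[ $(jq -r '.raid_level' <<<"$info") == raid1  ]] || exit 1
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 3 ]] || exit 1
  # the removed slot: name null, all-zero uuid, "is_configured": false
  [[ $(jq -r '.base_bdevs_list[0].is_configured' <<<"$info") == false ]] || exit 1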
00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.110 14:21:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.111 14:21:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.111 14:21:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.369 14:21:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.369 "name": "raid_bdev1", 00:19:24.369 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:24.369 "strip_size_kb": 0, 00:19:24.369 "state": "online", 00:19:24.369 "raid_level": "raid1", 00:19:24.369 "superblock": false, 00:19:24.369 "num_base_bdevs": 4, 00:19:24.369 "num_base_bdevs_discovered": 3, 00:19:24.369 "num_base_bdevs_operational": 3, 00:19:24.369 "base_bdevs_list": [ 00:19:24.369 { 00:19:24.369 "name": null, 00:19:24.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.369 "is_configured": false, 00:19:24.369 "data_offset": 0, 00:19:24.369 "data_size": 65536 00:19:24.369 }, 00:19:24.369 { 00:19:24.369 "name": "BaseBdev2", 00:19:24.369 "uuid": "f8998d81-1789-4908-b3d3-df2c981e8025", 00:19:24.369 "is_configured": true, 00:19:24.369 "data_offset": 0, 00:19:24.369 "data_size": 65536 00:19:24.369 }, 00:19:24.369 { 00:19:24.369 "name": "BaseBdev3", 00:19:24.369 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:24.369 "is_configured": true, 00:19:24.369 "data_offset": 0, 00:19:24.369 "data_size": 65536 00:19:24.369 }, 00:19:24.369 { 00:19:24.369 "name": "BaseBdev4", 00:19:24.369 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:24.369 "is_configured": true, 00:19:24.369 "data_offset": 0, 00:19:24.369 "data_size": 65536 00:19:24.369 } 00:19:24.369 ] 00:19:24.369 }' 00:19:24.369 14:21:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.369 14:21:16 -- common/autotest_common.sh@10 -- # set +x 00:19:25.305 14:21:17 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:25.305 [2024-11-18 14:21:17.291727] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:25.305 [2024-11-18 14:21:17.291768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.305 [2024-11-18 14:21:17.297255] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:19:25.305 [2024-11-18 14:21:17.299349] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:25.305 14:21:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:26.682 "name": "raid_bdev1", 00:19:26.682 
"uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:26.682 "strip_size_kb": 0, 00:19:26.682 "state": "online", 00:19:26.682 "raid_level": "raid1", 00:19:26.682 "superblock": false, 00:19:26.682 "num_base_bdevs": 4, 00:19:26.682 "num_base_bdevs_discovered": 4, 00:19:26.682 "num_base_bdevs_operational": 4, 00:19:26.682 "process": { 00:19:26.682 "type": "rebuild", 00:19:26.682 "target": "spare", 00:19:26.682 "progress": { 00:19:26.682 "blocks": 24576, 00:19:26.682 "percent": 37 00:19:26.682 } 00:19:26.682 }, 00:19:26.682 "base_bdevs_list": [ 00:19:26.682 { 00:19:26.682 "name": "spare", 00:19:26.682 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:26.682 "is_configured": true, 00:19:26.682 "data_offset": 0, 00:19:26.682 "data_size": 65536 00:19:26.682 }, 00:19:26.682 { 00:19:26.682 "name": "BaseBdev2", 00:19:26.682 "uuid": "f8998d81-1789-4908-b3d3-df2c981e8025", 00:19:26.682 "is_configured": true, 00:19:26.682 "data_offset": 0, 00:19:26.682 "data_size": 65536 00:19:26.682 }, 00:19:26.682 { 00:19:26.682 "name": "BaseBdev3", 00:19:26.682 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:26.682 "is_configured": true, 00:19:26.682 "data_offset": 0, 00:19:26.682 "data_size": 65536 00:19:26.682 }, 00:19:26.682 { 00:19:26.682 "name": "BaseBdev4", 00:19:26.682 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:26.682 "is_configured": true, 00:19:26.682 "data_offset": 0, 00:19:26.682 "data_size": 65536 00:19:26.682 } 00:19:26.682 ] 00:19:26.682 }' 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.682 14:21:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:26.941 [2024-11-18 14:21:18.841583] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.941 [2024-11-18 14:21:18.909437] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:26.941 [2024-11-18 14:21:18.909537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.941 14:21:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.201 14:21:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.201 "name": "raid_bdev1", 00:19:27.201 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:27.201 
"strip_size_kb": 0, 00:19:27.201 "state": "online", 00:19:27.201 "raid_level": "raid1", 00:19:27.201 "superblock": false, 00:19:27.201 "num_base_bdevs": 4, 00:19:27.201 "num_base_bdevs_discovered": 3, 00:19:27.201 "num_base_bdevs_operational": 3, 00:19:27.201 "base_bdevs_list": [ 00:19:27.201 { 00:19:27.201 "name": null, 00:19:27.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.201 "is_configured": false, 00:19:27.201 "data_offset": 0, 00:19:27.201 "data_size": 65536 00:19:27.201 }, 00:19:27.201 { 00:19:27.201 "name": "BaseBdev2", 00:19:27.201 "uuid": "f8998d81-1789-4908-b3d3-df2c981e8025", 00:19:27.201 "is_configured": true, 00:19:27.201 "data_offset": 0, 00:19:27.201 "data_size": 65536 00:19:27.201 }, 00:19:27.201 { 00:19:27.201 "name": "BaseBdev3", 00:19:27.201 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:27.201 "is_configured": true, 00:19:27.201 "data_offset": 0, 00:19:27.201 "data_size": 65536 00:19:27.201 }, 00:19:27.201 { 00:19:27.201 "name": "BaseBdev4", 00:19:27.201 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:27.201 "is_configured": true, 00:19:27.201 "data_offset": 0, 00:19:27.201 "data_size": 65536 00:19:27.201 } 00:19:27.201 ] 00:19:27.201 }' 00:19:27.201 14:21:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.201 14:21:19 -- common/autotest_common.sh@10 -- # set +x 00:19:27.768 14:21:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.768 14:21:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:27.768 14:21:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:27.768 14:21:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:27.768 14:21:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:27.768 14:21:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.769 14:21:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.028 14:21:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:28.028 "name": "raid_bdev1", 00:19:28.028 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:28.028 "strip_size_kb": 0, 00:19:28.028 "state": "online", 00:19:28.028 "raid_level": "raid1", 00:19:28.028 "superblock": false, 00:19:28.028 "num_base_bdevs": 4, 00:19:28.028 "num_base_bdevs_discovered": 3, 00:19:28.028 "num_base_bdevs_operational": 3, 00:19:28.028 "base_bdevs_list": [ 00:19:28.028 { 00:19:28.028 "name": null, 00:19:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.028 "is_configured": false, 00:19:28.028 "data_offset": 0, 00:19:28.028 "data_size": 65536 00:19:28.028 }, 00:19:28.028 { 00:19:28.028 "name": "BaseBdev2", 00:19:28.028 "uuid": "f8998d81-1789-4908-b3d3-df2c981e8025", 00:19:28.028 "is_configured": true, 00:19:28.028 "data_offset": 0, 00:19:28.028 "data_size": 65536 00:19:28.028 }, 00:19:28.028 { 00:19:28.028 "name": "BaseBdev3", 00:19:28.028 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:28.028 "is_configured": true, 00:19:28.028 "data_offset": 0, 00:19:28.028 "data_size": 65536 00:19:28.028 }, 00:19:28.028 { 00:19:28.028 "name": "BaseBdev4", 00:19:28.028 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:28.028 "is_configured": true, 00:19:28.028 "data_offset": 0, 00:19:28.028 "data_size": 65536 00:19:28.028 } 00:19:28.028 ] 00:19:28.028 }' 00:19:28.028 14:21:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:28.028 14:21:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:28.028 14:21:20 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:28.288 14:21:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:28.288 14:21:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:28.288 [2024-11-18 14:21:20.317938] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:28.288 [2024-11-18 14:21:20.317972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:28.288 [2024-11-18 14:21:20.319617] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:19:28.288 [2024-11-18 14:21:20.321581] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:28.288 14:21:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:29.664 14:21:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.664 14:21:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:29.664 14:21:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:29.664 14:21:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:29.664 14:21:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:29.665 "name": "raid_bdev1", 00:19:29.665 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:29.665 "strip_size_kb": 0, 00:19:29.665 "state": "online", 00:19:29.665 "raid_level": "raid1", 00:19:29.665 "superblock": false, 00:19:29.665 "num_base_bdevs": 4, 00:19:29.665 "num_base_bdevs_discovered": 4, 00:19:29.665 "num_base_bdevs_operational": 4, 00:19:29.665 "process": { 00:19:29.665 "type": "rebuild", 00:19:29.665 "target": "spare", 00:19:29.665 "progress": { 00:19:29.665 "blocks": 24576, 00:19:29.665 "percent": 37 00:19:29.665 } 00:19:29.665 }, 00:19:29.665 "base_bdevs_list": [ 00:19:29.665 { 00:19:29.665 "name": "spare", 00:19:29.665 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:29.665 "is_configured": true, 00:19:29.665 "data_offset": 0, 00:19:29.665 "data_size": 65536 00:19:29.665 }, 00:19:29.665 { 00:19:29.665 "name": "BaseBdev2", 00:19:29.665 "uuid": "f8998d81-1789-4908-b3d3-df2c981e8025", 00:19:29.665 "is_configured": true, 00:19:29.665 "data_offset": 0, 00:19:29.665 "data_size": 65536 00:19:29.665 }, 00:19:29.665 { 00:19:29.665 "name": "BaseBdev3", 00:19:29.665 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:29.665 "is_configured": true, 00:19:29.665 "data_offset": 0, 00:19:29.665 "data_size": 65536 00:19:29.665 }, 00:19:29.665 { 00:19:29.665 "name": "BaseBdev4", 00:19:29.665 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:29.665 "is_configured": true, 00:19:29.665 "data_offset": 0, 00:19:29.665 "data_size": 65536 00:19:29.665 } 00:19:29.665 ] 00:19:29.665 }' 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:29.665 14:21:21 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:19:29.665 14:21:21 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:29.924 [2024-11-18 14:21:21.903549] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.924 [2024-11-18 14:21:21.929088] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06220 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.924 14:21:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:30.183 "name": "raid_bdev1", 00:19:30.183 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:30.183 "strip_size_kb": 0, 00:19:30.183 "state": "online", 00:19:30.183 "raid_level": "raid1", 00:19:30.183 "superblock": false, 00:19:30.183 "num_base_bdevs": 4, 00:19:30.183 "num_base_bdevs_discovered": 3, 00:19:30.183 "num_base_bdevs_operational": 3, 00:19:30.183 "process": { 00:19:30.183 "type": "rebuild", 00:19:30.183 "target": "spare", 00:19:30.183 "progress": { 00:19:30.183 "blocks": 34816, 00:19:30.183 "percent": 53 00:19:30.183 } 00:19:30.183 }, 00:19:30.183 "base_bdevs_list": [ 00:19:30.183 { 00:19:30.183 "name": "spare", 00:19:30.183 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:30.183 "is_configured": true, 00:19:30.183 "data_offset": 0, 00:19:30.183 "data_size": 65536 00:19:30.183 }, 00:19:30.183 { 00:19:30.183 "name": null, 00:19:30.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.183 "is_configured": false, 00:19:30.183 "data_offset": 0, 00:19:30.183 "data_size": 65536 00:19:30.183 }, 00:19:30.183 { 00:19:30.183 "name": "BaseBdev3", 00:19:30.183 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:30.183 "is_configured": true, 00:19:30.183 "data_offset": 0, 00:19:30.183 "data_size": 65536 00:19:30.183 }, 00:19:30.183 { 00:19:30.183 "name": "BaseBdev4", 00:19:30.183 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:30.183 "is_configured": true, 00:19:30.183 "data_offset": 0, 00:19:30.183 "data_size": 65536 00:19:30.183 } 00:19:30.183 ] 00:19:30.183 }' 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@657 -- # local timeout=438 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.183 14:21:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.442 14:21:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:30.442 "name": "raid_bdev1", 00:19:30.442 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:30.442 "strip_size_kb": 0, 00:19:30.442 "state": "online", 00:19:30.442 "raid_level": "raid1", 00:19:30.442 "superblock": false, 00:19:30.442 "num_base_bdevs": 4, 00:19:30.442 "num_base_bdevs_discovered": 3, 00:19:30.442 "num_base_bdevs_operational": 3, 00:19:30.442 "process": { 00:19:30.442 "type": "rebuild", 00:19:30.442 "target": "spare", 00:19:30.442 "progress": { 00:19:30.442 "blocks": 40960, 00:19:30.442 "percent": 62 00:19:30.442 } 00:19:30.442 }, 00:19:30.442 "base_bdevs_list": [ 00:19:30.442 { 00:19:30.442 "name": "spare", 00:19:30.442 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:30.442 "is_configured": true, 00:19:30.442 "data_offset": 0, 00:19:30.442 "data_size": 65536 00:19:30.442 }, 00:19:30.442 { 00:19:30.442 "name": null, 00:19:30.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.442 "is_configured": false, 00:19:30.442 "data_offset": 0, 00:19:30.442 "data_size": 65536 00:19:30.442 }, 00:19:30.442 { 00:19:30.442 "name": "BaseBdev3", 00:19:30.442 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:30.442 "is_configured": true, 00:19:30.442 "data_offset": 0, 00:19:30.442 "data_size": 65536 00:19:30.442 }, 00:19:30.442 { 00:19:30.442 "name": "BaseBdev4", 00:19:30.442 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:30.442 "is_configured": true, 00:19:30.442 "data_offset": 0, 00:19:30.442 "data_size": 65536 00:19:30.442 } 00:19:30.442 ] 00:19:30.442 }' 00:19:30.442 14:21:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:30.442 14:21:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.442 14:21:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:30.701 14:21:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:30.701 14:21:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:31.637 [2024-11-18 14:21:23.537906] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:31.637 [2024-11-18 14:21:23.537976] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:31.637 [2024-11-18 14:21:23.538062] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:31.637 14:21:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:31.896 "name": "raid_bdev1", 00:19:31.896 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:31.896 "strip_size_kb": 0, 00:19:31.896 "state": "online", 00:19:31.896 "raid_level": "raid1", 00:19:31.896 "superblock": false, 00:19:31.896 "num_base_bdevs": 4, 00:19:31.896 "num_base_bdevs_discovered": 3, 00:19:31.896 "num_base_bdevs_operational": 3, 00:19:31.896 "base_bdevs_list": [ 00:19:31.896 { 00:19:31.896 "name": "spare", 00:19:31.896 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:31.896 "is_configured": true, 00:19:31.896 "data_offset": 0, 00:19:31.896 "data_size": 65536 00:19:31.896 }, 00:19:31.896 { 00:19:31.896 "name": null, 00:19:31.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.896 "is_configured": false, 00:19:31.896 "data_offset": 0, 00:19:31.896 "data_size": 65536 00:19:31.896 }, 00:19:31.896 { 00:19:31.896 "name": "BaseBdev3", 00:19:31.896 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:31.896 "is_configured": true, 00:19:31.896 "data_offset": 0, 00:19:31.896 "data_size": 65536 00:19:31.896 }, 00:19:31.896 { 00:19:31.896 "name": "BaseBdev4", 00:19:31.896 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:31.896 "is_configured": true, 00:19:31.896 "data_offset": 0, 00:19:31.896 "data_size": 65536 00:19:31.896 } 00:19:31.896 ] 00:19:31.896 }' 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@660 -- # break 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.896 14:21:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.155 14:21:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:32.155 "name": "raid_bdev1", 00:19:32.155 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:32.155 "strip_size_kb": 0, 00:19:32.155 "state": "online", 00:19:32.155 "raid_level": "raid1", 00:19:32.155 "superblock": false, 00:19:32.155 "num_base_bdevs": 4, 00:19:32.155 "num_base_bdevs_discovered": 3, 00:19:32.155 "num_base_bdevs_operational": 3, 00:19:32.155 "base_bdevs_list": [ 00:19:32.155 { 00:19:32.155 "name": "spare", 00:19:32.155 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:32.155 "is_configured": true, 00:19:32.155 "data_offset": 0, 00:19:32.155 "data_size": 65536 00:19:32.155 }, 00:19:32.155 { 00:19:32.155 "name": null, 00:19:32.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.155 "is_configured": false, 00:19:32.155 "data_offset": 0, 00:19:32.155 "data_size": 65536 00:19:32.155 }, 00:19:32.155 { 00:19:32.155 "name": "BaseBdev3", 00:19:32.155 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:32.155 
"is_configured": true, 00:19:32.155 "data_offset": 0, 00:19:32.155 "data_size": 65536 00:19:32.155 }, 00:19:32.155 { 00:19:32.155 "name": "BaseBdev4", 00:19:32.156 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:32.156 "is_configured": true, 00:19:32.156 "data_offset": 0, 00:19:32.156 "data_size": 65536 00:19:32.156 } 00:19:32.156 ] 00:19:32.156 }' 00:19:32.156 14:21:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:32.156 14:21:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:32.156 14:21:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.415 14:21:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.675 14:21:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.675 "name": "raid_bdev1", 00:19:32.675 "uuid": "b6a825d7-39ad-48e7-b168-437934f1f4dc", 00:19:32.675 "strip_size_kb": 0, 00:19:32.675 "state": "online", 00:19:32.675 "raid_level": "raid1", 00:19:32.675 "superblock": false, 00:19:32.675 "num_base_bdevs": 4, 00:19:32.675 "num_base_bdevs_discovered": 3, 00:19:32.675 "num_base_bdevs_operational": 3, 00:19:32.675 "base_bdevs_list": [ 00:19:32.675 { 00:19:32.675 "name": "spare", 00:19:32.675 "uuid": "ee478d50-5217-55f5-a0dd-6649dfe87c1c", 00:19:32.675 "is_configured": true, 00:19:32.675 "data_offset": 0, 00:19:32.675 "data_size": 65536 00:19:32.675 }, 00:19:32.675 { 00:19:32.675 "name": null, 00:19:32.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.675 "is_configured": false, 00:19:32.675 "data_offset": 0, 00:19:32.675 "data_size": 65536 00:19:32.675 }, 00:19:32.675 { 00:19:32.675 "name": "BaseBdev3", 00:19:32.675 "uuid": "d8becdf7-c7d3-45af-ae45-35af1b734dfa", 00:19:32.675 "is_configured": true, 00:19:32.675 "data_offset": 0, 00:19:32.675 "data_size": 65536 00:19:32.675 }, 00:19:32.675 { 00:19:32.675 "name": "BaseBdev4", 00:19:32.675 "uuid": "89321c5e-96a3-48f9-97f6-4f290ea4feb6", 00:19:32.675 "is_configured": true, 00:19:32.675 "data_offset": 0, 00:19:32.675 "data_size": 65536 00:19:32.675 } 00:19:32.675 ] 00:19:32.675 }' 00:19:32.675 14:21:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.675 14:21:24 -- common/autotest_common.sh@10 -- # set +x 00:19:33.243 14:21:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:33.243 [2024-11-18 14:21:25.257482] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.243 [2024-11-18 14:21:25.257507] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.243 [2024-11-18 14:21:25.257604] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.243 [2024-11-18 14:21:25.257682] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.243 [2024-11-18 14:21:25.257695] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:19:33.243 14:21:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.243 14:21:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:33.502 14:21:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:33.502 14:21:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:33.502 14:21:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@12 -- # local i 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:33.502 14:21:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:33.761 /dev/nbd0 00:19:33.761 14:21:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:33.761 14:21:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:33.761 14:21:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:33.761 14:21:25 -- common/autotest_common.sh@867 -- # local i 00:19:33.761 14:21:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:33.761 14:21:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:33.761 14:21:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:33.761 14:21:25 -- common/autotest_common.sh@871 -- # break 00:19:33.761 14:21:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:33.761 14:21:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:33.761 14:21:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.019 1+0 records in 00:19:34.019 1+0 records out 00:19:34.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418045 s, 9.8 MB/s 00:19:34.019 14:21:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.019 14:21:25 -- common/autotest_common.sh@884 -- # size=4096 00:19:34.019 14:21:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.019 14:21:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:34.019 14:21:25 -- common/autotest_common.sh@887 -- # return 0 00:19:34.019 14:21:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.019 14:21:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:34.019 14:21:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:34.279 /dev/nbd1 00:19:34.279 
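The waitfornbd helper traced just above (for nbd0) and just below (for nbd1) gates the test on the nbd device actually being usable: it polls /proc/partitions up to 20 times, then issues a single 4 KiB O_DIRECT read. A minimal standalone sketch of that pattern follows; the 0.1 s back-off and discarding the read into /dev/null are simplifications assumed here — the helper in the trace copies the block into a temp file and stats it instead:

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay; the trace does not show the back-off
        done
        # one 4 KiB direct read fails fast if the device is not servicing I/O
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }

It is invoked as e.g. waitfornbd nbd1 immediately after nbd_start_disk, as in the trace that continues below.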
14:21:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:34.279 14:21:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:34.279 14:21:26 -- common/autotest_common.sh@867 -- # local i 00:19:34.279 14:21:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:34.279 14:21:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:34.279 14:21:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:34.279 14:21:26 -- common/autotest_common.sh@871 -- # break 00:19:34.279 14:21:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:34.279 14:21:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:34.279 14:21:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.279 1+0 records in 00:19:34.279 1+0 records out 00:19:34.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055953 s, 7.3 MB/s 00:19:34.279 14:21:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.279 14:21:26 -- common/autotest_common.sh@884 -- # size=4096 00:19:34.279 14:21:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.279 14:21:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:34.279 14:21:26 -- common/autotest_common.sh@887 -- # return 0 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:34.279 14:21:26 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:34.279 14:21:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@51 -- # local i 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.279 14:21:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@41 -- # break 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:34.538 14:21:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:34.796 14:21:26 -- 
bdev/nbd_common.sh@41 -- # break 00:19:34.796 14:21:26 -- bdev/nbd_common.sh@45 -- # return 0 00:19:34.796 14:21:26 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:34.796 14:21:26 -- bdev/bdev_raid.sh@709 -- # killprocess 134546 00:19:34.796 14:21:26 -- common/autotest_common.sh@936 -- # '[' -z 134546 ']' 00:19:34.796 14:21:26 -- common/autotest_common.sh@940 -- # kill -0 134546 00:19:34.796 14:21:26 -- common/autotest_common.sh@941 -- # uname 00:19:34.796 14:21:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.796 14:21:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134546 00:19:34.796 14:21:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:34.796 14:21:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:34.796 killing process with pid 134546 00:19:34.796 14:21:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134546' 00:19:34.796 14:21:26 -- common/autotest_common.sh@955 -- # kill 134546 00:19:34.796 Received shutdown signal, test time was about 60.000000 seconds 00:19:34.796 00:19:34.796 Latency(us) 00:19:34.796 [2024-11-18T14:21:26.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.796 [2024-11-18T14:21:26.870Z] =================================================================================================================== 00:19:34.796 [2024-11-18T14:21:26.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.796 [2024-11-18 14:21:26.692966] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:34.796 14:21:26 -- common/autotest_common.sh@960 -- # wait 134546 00:19:34.797 [2024-11-18 14:21:26.745898] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:35.055 00:19:35.055 real 0m20.574s 00:19:35.055 user 0m29.403s 00:19:35.055 sys 0m3.288s 00:19:35.055 14:21:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:35.055 ************************************ 00:19:35.055 END TEST raid_rebuild_test 00:19:35.055 ************************************ 00:19:35.055 14:21:27 -- common/autotest_common.sh@10 -- # set +x 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:19:35.055 14:21:27 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:35.055 14:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:35.055 14:21:27 -- common/autotest_common.sh@10 -- # set +x 00:19:35.055 ************************************ 00:19:35.055 START TEST raid_rebuild_test_sb 00:19:35.055 ************************************ 00:19:35.055 14:21:27 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:35.055 14:21:27 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:35.055 14:21:27 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:35.056 14:21:27 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:35.056 14:21:27 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:35.056 14:21:27 -- bdev/bdev_raid.sh@544 -- # raid_pid=135077 00:19:35.056 14:21:27 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135077 /var/tmp/spdk-raid.sock 00:19:35.056 14:21:27 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:35.056 14:21:27 -- common/autotest_common.sh@829 -- # '[' -z 135077 ']' 00:19:35.056 14:21:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:35.056 14:21:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:35.056 14:21:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:35.056 14:21:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.056 14:21:27 -- common/autotest_common.sh@10 -- # set +x 00:19:35.314 [2024-11-18 14:21:27.170373] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:35.314 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:35.314 Zero copy mechanism will not be used. 
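The bdevperf invocation above is what keeps I/O available while base bdevs are removed and rebuilt. Reading the flags as standard bdevperf options: -r sets the private RPC socket, -T raid_bdev1 names the target bdev, -t 60 -w randrw -M 50 -o 3M -q 2 shape a 60-second mixed read/write load, and -z brings the app up idle so it can first be configured over RPC. A hedged sketch of the launch-and-wait sequence, assuming an SPDK build tree; the real waitforlisten helper is more careful, and rpc_get_methods is used here merely as a convenient liveness probe:

    sock=/var/tmp/spdk-raid.sock
    ./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw \
        -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # poll the RPC socket until the app answers
    until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done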
00:19:35.314 [2024-11-18 14:21:27.170588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135077 ] 00:19:35.314 [2024-11-18 14:21:27.308625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.314 [2024-11-18 14:21:27.376754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.573 [2024-11-18 14:21:27.446705] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.141 14:21:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.141 14:21:28 -- common/autotest_common.sh@862 -- # return 0 00:19:36.141 14:21:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:36.141 14:21:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:36.141 14:21:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:36.400 BaseBdev1_malloc 00:19:36.400 14:21:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:36.659 [2024-11-18 14:21:28.534920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:36.659 [2024-11-18 14:21:28.535046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.659 [2024-11-18 14:21:28.535090] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:36.659 [2024-11-18 14:21:28.535137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.659 [2024-11-18 14:21:28.537525] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.659 [2024-11-18 14:21:28.537585] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:36.659 BaseBdev1 00:19:36.659 14:21:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:36.659 14:21:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:36.659 14:21:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:36.918 BaseBdev2_malloc 00:19:36.918 14:21:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:36.918 [2024-11-18 14:21:28.964533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:36.918 [2024-11-18 14:21:28.964596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.918 [2024-11-18 14:21:28.964631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:36.918 [2024-11-18 14:21:28.964673] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.918 [2024-11-18 14:21:28.966860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.918 [2024-11-18 14:21:28.966909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:36.918 BaseBdev2 00:19:36.918 14:21:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:36.918 14:21:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:36.918 14:21:28 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:37.176 BaseBdev3_malloc 00:19:37.177 14:21:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:37.434 [2024-11-18 14:21:29.413896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:37.434 [2024-11-18 14:21:29.413956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.434 [2024-11-18 14:21:29.413992] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:37.434 [2024-11-18 14:21:29.414035] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.434 [2024-11-18 14:21:29.416263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.434 [2024-11-18 14:21:29.416317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:37.434 BaseBdev3 00:19:37.434 14:21:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:37.434 14:21:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:37.434 14:21:29 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:37.714 BaseBdev4_malloc 00:19:37.714 14:21:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:38.021 [2024-11-18 14:21:29.799400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:38.021 [2024-11-18 14:21:29.799472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.021 [2024-11-18 14:21:29.799504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:38.021 [2024-11-18 14:21:29.799545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.021 [2024-11-18 14:21:29.801770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.021 [2024-11-18 14:21:29.801846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:38.021 BaseBdev4 00:19:38.021 14:21:29 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:38.021 spare_malloc 00:19:38.021 14:21:30 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:38.291 spare_delay 00:19:38.291 14:21:30 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:38.549 [2024-11-18 14:21:30.472966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:38.549 [2024-11-18 14:21:30.473035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.549 [2024-11-18 14:21:30.473067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:38.549 [2024-11-18 14:21:30.473107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.549 [2024-11-18 14:21:30.475418] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:19:38.549 [2024-11-18 14:21:30.475472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:38.549 spare 00:19:38.549 14:21:30 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:19:38.808 [2024-11-18 14:21:30.661077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.808 [2024-11-18 14:21:30.663072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.808 [2024-11-18 14:21:30.663146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.808 [2024-11-18 14:21:30.663214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:38.808 [2024-11-18 14:21:30.663433] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:38.808 [2024-11-18 14:21:30.663450] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:38.808 [2024-11-18 14:21:30.663580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:19:38.808 [2024-11-18 14:21:30.663956] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:38.808 [2024-11-18 14:21:30.663977] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:38.808 [2024-11-18 14:21:30.664103] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.808 "name": "raid_bdev1", 00:19:38.808 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:38.808 "strip_size_kb": 0, 00:19:38.808 "state": "online", 00:19:38.808 "raid_level": "raid1", 00:19:38.808 "superblock": true, 00:19:38.808 "num_base_bdevs": 4, 00:19:38.808 "num_base_bdevs_discovered": 4, 00:19:38.808 "num_base_bdevs_operational": 4, 00:19:38.808 "base_bdevs_list": [ 00:19:38.808 { 00:19:38.808 "name": "BaseBdev1", 00:19:38.808 "uuid": "1a0b6efd-e04a-538d-9016-6c0cfb042655", 00:19:38.808 "is_configured": true, 00:19:38.808 "data_offset": 2048, 00:19:38.808 "data_size": 63488 00:19:38.808 }, 00:19:38.808 { 00:19:38.808 "name": "BaseBdev2", 00:19:38.808 "uuid": "d1f4d9ff-a146-57e9-ac00-71efc735382c", 00:19:38.808 "is_configured": true, 00:19:38.808 "data_offset": 2048, 
00:19:38.808 "data_size": 63488 00:19:38.808 }, 00:19:38.808 { 00:19:38.808 "name": "BaseBdev3", 00:19:38.808 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:38.808 "is_configured": true, 00:19:38.808 "data_offset": 2048, 00:19:38.808 "data_size": 63488 00:19:38.808 }, 00:19:38.808 { 00:19:38.808 "name": "BaseBdev4", 00:19:38.808 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:38.808 "is_configured": true, 00:19:38.808 "data_offset": 2048, 00:19:38.808 "data_size": 63488 00:19:38.808 } 00:19:38.808 ] 00:19:38.808 }' 00:19:38.808 14:21:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.808 14:21:30 -- common/autotest_common.sh@10 -- # set +x 00:19:39.743 14:21:31 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:39.743 14:21:31 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:39.743 [2024-11-18 14:21:31.641386] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.743 14:21:31 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:39.743 14:21:31 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.743 14:21:31 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:40.002 14:21:31 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:40.002 14:21:31 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:40.002 14:21:31 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:40.002 14:21:31 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@12 -- # local i 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.002 14:21:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:40.261 [2024-11-18 14:21:32.161317] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:19:40.261 /dev/nbd0 00:19:40.261 14:21:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.261 14:21:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.261 14:21:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:40.261 14:21:32 -- common/autotest_common.sh@867 -- # local i 00:19:40.261 14:21:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:40.261 14:21:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:40.262 14:21:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:40.262 14:21:32 -- common/autotest_common.sh@871 -- # break 00:19:40.262 14:21:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:40.262 14:21:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:40.262 14:21:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.262 1+0 records in 00:19:40.262 1+0 records out 00:19:40.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319691 s, 12.8 
MB/s 00:19:40.262 14:21:32 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.262 14:21:32 -- common/autotest_common.sh@884 -- # size=4096 00:19:40.262 14:21:32 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.262 14:21:32 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:40.262 14:21:32 -- common/autotest_common.sh@887 -- # return 0 00:19:40.262 14:21:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.262 14:21:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.262 14:21:32 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:40.262 14:21:32 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:40.262 14:21:32 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:46.828 63488+0 records in 00:19:46.828 63488+0 records out 00:19:46.828 32505856 bytes (33 MB, 31 MiB) copied, 5.61671 s, 5.8 MB/s 00:19:46.828 14:21:37 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:46.828 14:21:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:46.828 14:21:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:46.828 14:21:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:46.828 14:21:37 -- bdev/nbd_common.sh@51 -- # local i 00:19:46.828 14:21:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:46.828 14:21:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@41 -- # break 00:19:46.828 [2024-11-18 14:21:38.096811] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.828 14:21:38 -- bdev/nbd_common.sh@45 -- # return 0 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:46.828 [2024-11-18 14:21:38.336515] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.828 14:21:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.829 14:21:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.829 14:21:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.829 14:21:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:46.829 14:21:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.829 "name": "raid_bdev1", 00:19:46.829 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:46.829 "strip_size_kb": 0, 00:19:46.829 "state": "online", 00:19:46.829 "raid_level": "raid1", 00:19:46.829 "superblock": true, 00:19:46.829 "num_base_bdevs": 4, 00:19:46.829 "num_base_bdevs_discovered": 3, 00:19:46.829 "num_base_bdevs_operational": 3, 00:19:46.829 "base_bdevs_list": [ 00:19:46.829 { 00:19:46.829 "name": null, 00:19:46.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.829 "is_configured": false, 00:19:46.829 "data_offset": 2048, 00:19:46.829 "data_size": 63488 00:19:46.829 }, 00:19:46.829 { 00:19:46.829 "name": "BaseBdev2", 00:19:46.829 "uuid": "d1f4d9ff-a146-57e9-ac00-71efc735382c", 00:19:46.829 "is_configured": true, 00:19:46.829 "data_offset": 2048, 00:19:46.829 "data_size": 63488 00:19:46.829 }, 00:19:46.829 { 00:19:46.829 "name": "BaseBdev3", 00:19:46.829 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:46.829 "is_configured": true, 00:19:46.829 "data_offset": 2048, 00:19:46.829 "data_size": 63488 00:19:46.829 }, 00:19:46.829 { 00:19:46.829 "name": "BaseBdev4", 00:19:46.829 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:46.829 "is_configured": true, 00:19:46.829 "data_offset": 2048, 00:19:46.829 "data_size": 63488 00:19:46.829 } 00:19:46.829 ] 00:19:46.829 }' 00:19:46.829 14:21:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.829 14:21:38 -- common/autotest_common.sh@10 -- # set +x 00:19:47.087 14:21:39 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.346 [2024-11-18 14:21:39.348698] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:47.346 [2024-11-18 14:21:39.348746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.346 [2024-11-18 14:21:39.354192] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:19:47.346 [2024-11-18 14:21:39.356347] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.346 14:21:39 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.721 14:21:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:48.721 "name": "raid_bdev1", 00:19:48.721 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:48.721 "strip_size_kb": 0, 00:19:48.721 "state": "online", 00:19:48.721 "raid_level": "raid1", 00:19:48.722 "superblock": true, 00:19:48.722 "num_base_bdevs": 4, 00:19:48.722 "num_base_bdevs_discovered": 4, 00:19:48.722 "num_base_bdevs_operational": 4, 00:19:48.722 "process": { 00:19:48.722 "type": "rebuild", 00:19:48.722 "target": "spare", 00:19:48.722 "progress": { 00:19:48.722 "blocks": 24576, 00:19:48.722 "percent": 38 00:19:48.722 } 00:19:48.722 
}, 00:19:48.722 "base_bdevs_list": [ 00:19:48.722 { 00:19:48.722 "name": "spare", 00:19:48.722 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:48.722 "is_configured": true, 00:19:48.722 "data_offset": 2048, 00:19:48.722 "data_size": 63488 00:19:48.722 }, 00:19:48.722 { 00:19:48.722 "name": "BaseBdev2", 00:19:48.722 "uuid": "d1f4d9ff-a146-57e9-ac00-71efc735382c", 00:19:48.722 "is_configured": true, 00:19:48.722 "data_offset": 2048, 00:19:48.722 "data_size": 63488 00:19:48.722 }, 00:19:48.722 { 00:19:48.722 "name": "BaseBdev3", 00:19:48.722 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:48.722 "is_configured": true, 00:19:48.722 "data_offset": 2048, 00:19:48.722 "data_size": 63488 00:19:48.722 }, 00:19:48.722 { 00:19:48.722 "name": "BaseBdev4", 00:19:48.722 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:48.722 "is_configured": true, 00:19:48.722 "data_offset": 2048, 00:19:48.722 "data_size": 63488 00:19:48.722 } 00:19:48.722 ] 00:19:48.722 }' 00:19:48.722 14:21:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:48.722 14:21:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.722 14:21:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:48.722 14:21:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.722 14:21:40 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:48.981 [2024-11-18 14:21:40.926679] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:48.981 [2024-11-18 14:21:40.966513] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:48.981 [2024-11-18 14:21:40.966608] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.981 14:21:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.240 14:21:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.240 "name": "raid_bdev1", 00:19:49.240 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:49.240 "strip_size_kb": 0, 00:19:49.240 "state": "online", 00:19:49.240 "raid_level": "raid1", 00:19:49.240 "superblock": true, 00:19:49.240 "num_base_bdevs": 4, 00:19:49.240 "num_base_bdevs_discovered": 3, 00:19:49.240 "num_base_bdevs_operational": 3, 00:19:49.240 "base_bdevs_list": [ 00:19:49.240 { 00:19:49.240 "name": null, 00:19:49.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.240 "is_configured": false, 00:19:49.240 "data_offset": 2048, 00:19:49.240 "data_size": 63488 00:19:49.240 }, 
00:19:49.240 { 00:19:49.240 "name": "BaseBdev2", 00:19:49.240 "uuid": "d1f4d9ff-a146-57e9-ac00-71efc735382c", 00:19:49.240 "is_configured": true, 00:19:49.240 "data_offset": 2048, 00:19:49.240 "data_size": 63488 00:19:49.240 }, 00:19:49.240 { 00:19:49.240 "name": "BaseBdev3", 00:19:49.240 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:49.240 "is_configured": true, 00:19:49.240 "data_offset": 2048, 00:19:49.240 "data_size": 63488 00:19:49.240 }, 00:19:49.240 { 00:19:49.240 "name": "BaseBdev4", 00:19:49.240 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:49.240 "is_configured": true, 00:19:49.240 "data_offset": 2048, 00:19:49.240 "data_size": 63488 00:19:49.240 } 00:19:49.240 ] 00:19:49.240 }' 00:19:49.240 14:21:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.240 14:21:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.808 14:21:41 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:49.808 14:21:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:49.808 14:21:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:49.809 14:21:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:49.809 14:21:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:49.809 14:21:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.809 14:21:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.067 14:21:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:50.067 "name": "raid_bdev1", 00:19:50.067 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:50.067 "strip_size_kb": 0, 00:19:50.067 "state": "online", 00:19:50.067 "raid_level": "raid1", 00:19:50.067 "superblock": true, 00:19:50.067 "num_base_bdevs": 4, 00:19:50.067 "num_base_bdevs_discovered": 3, 00:19:50.067 "num_base_bdevs_operational": 3, 00:19:50.067 "base_bdevs_list": [ 00:19:50.067 { 00:19:50.067 "name": null, 00:19:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.067 "is_configured": false, 00:19:50.067 "data_offset": 2048, 00:19:50.067 "data_size": 63488 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "name": "BaseBdev2", 00:19:50.067 "uuid": "d1f4d9ff-a146-57e9-ac00-71efc735382c", 00:19:50.067 "is_configured": true, 00:19:50.067 "data_offset": 2048, 00:19:50.067 "data_size": 63488 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "name": "BaseBdev3", 00:19:50.067 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:50.067 "is_configured": true, 00:19:50.067 "data_offset": 2048, 00:19:50.067 "data_size": 63488 00:19:50.067 }, 00:19:50.067 { 00:19:50.067 "name": "BaseBdev4", 00:19:50.067 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:50.067 "is_configured": true, 00:19:50.067 "data_offset": 2048, 00:19:50.067 "data_size": 63488 00:19:50.067 } 00:19:50.067 ] 00:19:50.067 }' 00:19:50.067 14:21:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:50.326 14:21:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:50.326 14:21:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:50.326 14:21:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:50.326 14:21:42 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.326 [2024-11-18 14:21:42.374959] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:50.326 [2024-11-18 14:21:42.374994] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.326 [2024-11-18 14:21:42.376629] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:19:50.326 [2024-11-18 14:21:42.378603] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:50.326 14:21:42 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:51.702 "name": "raid_bdev1", 00:19:51.702 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:51.702 "strip_size_kb": 0, 00:19:51.702 "state": "online", 00:19:51.702 "raid_level": "raid1", 00:19:51.702 "superblock": true, 00:19:51.702 "num_base_bdevs": 4, 00:19:51.702 "num_base_bdevs_discovered": 4, 00:19:51.702 "num_base_bdevs_operational": 4, 00:19:51.702 "process": { 00:19:51.702 "type": "rebuild", 00:19:51.702 "target": "spare", 00:19:51.702 "progress": { 00:19:51.702 "blocks": 24576, 00:19:51.702 "percent": 38 00:19:51.702 } 00:19:51.702 }, 00:19:51.702 "base_bdevs_list": [ 00:19:51.702 { 00:19:51.702 "name": "spare", 00:19:51.702 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:51.702 "is_configured": true, 00:19:51.702 "data_offset": 2048, 00:19:51.702 "data_size": 63488 00:19:51.702 }, 00:19:51.702 { 00:19:51.702 "name": "BaseBdev2", 00:19:51.702 "uuid": "d1f4d9ff-a146-57e9-ac00-71efc735382c", 00:19:51.702 "is_configured": true, 00:19:51.702 "data_offset": 2048, 00:19:51.702 "data_size": 63488 00:19:51.702 }, 00:19:51.702 { 00:19:51.702 "name": "BaseBdev3", 00:19:51.702 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:51.702 "is_configured": true, 00:19:51.702 "data_offset": 2048, 00:19:51.702 "data_size": 63488 00:19:51.702 }, 00:19:51.702 { 00:19:51.702 "name": "BaseBdev4", 00:19:51.702 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:51.702 "is_configured": true, 00:19:51.702 "data_offset": 2048, 00:19:51.702 "data_size": 63488 00:19:51.702 } 00:19:51.702 ] 00:19:51.702 }' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:51.702 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:19:51.702 14:21:43 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:51.961 [2024-11-18 14:21:43.975969] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:51.961 [2024-11-18 14:21:43.985966] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:19:52.219 14:21:44 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:19:52.219 14:21:44 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:19:52.219 14:21:44 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.220 "name": "raid_bdev1", 00:19:52.220 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:52.220 "strip_size_kb": 0, 00:19:52.220 "state": "online", 00:19:52.220 "raid_level": "raid1", 00:19:52.220 "superblock": true, 00:19:52.220 "num_base_bdevs": 4, 00:19:52.220 "num_base_bdevs_discovered": 3, 00:19:52.220 "num_base_bdevs_operational": 3, 00:19:52.220 "process": { 00:19:52.220 "type": "rebuild", 00:19:52.220 "target": "spare", 00:19:52.220 "progress": { 00:19:52.220 "blocks": 36864, 00:19:52.220 "percent": 58 00:19:52.220 } 00:19:52.220 }, 00:19:52.220 "base_bdevs_list": [ 00:19:52.220 { 00:19:52.220 "name": "spare", 00:19:52.220 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:52.220 "is_configured": true, 00:19:52.220 "data_offset": 2048, 00:19:52.220 "data_size": 63488 00:19:52.220 }, 00:19:52.220 { 00:19:52.220 "name": null, 00:19:52.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.220 "is_configured": false, 00:19:52.220 "data_offset": 2048, 00:19:52.220 "data_size": 63488 00:19:52.220 }, 00:19:52.220 { 00:19:52.220 "name": "BaseBdev3", 00:19:52.220 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:52.220 "is_configured": true, 00:19:52.220 "data_offset": 2048, 00:19:52.220 "data_size": 63488 00:19:52.220 }, 00:19:52.220 { 00:19:52.220 "name": "BaseBdev4", 00:19:52.220 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:52.220 "is_configured": true, 00:19:52.220 "data_offset": 2048, 00:19:52.220 "data_size": 63488 00:19:52.220 } 00:19:52.220 ] 00:19:52.220 }' 00:19:52.220 14:21:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@657 -- # local timeout=460 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.479 14:21:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.737 14:21:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.737 "name": "raid_bdev1", 00:19:52.737 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:52.737 "strip_size_kb": 0, 00:19:52.737 "state": "online", 00:19:52.737 "raid_level": "raid1", 00:19:52.737 "superblock": true, 00:19:52.737 "num_base_bdevs": 4, 00:19:52.737 "num_base_bdevs_discovered": 3, 00:19:52.737 "num_base_bdevs_operational": 3, 00:19:52.737 "process": { 00:19:52.737 "type": "rebuild", 00:19:52.737 "target": "spare", 00:19:52.737 "progress": { 00:19:52.737 "blocks": 43008, 00:19:52.737 "percent": 67 00:19:52.737 } 00:19:52.737 }, 00:19:52.737 "base_bdevs_list": [ 00:19:52.737 { 00:19:52.737 "name": "spare", 00:19:52.737 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:52.737 "is_configured": true, 00:19:52.737 "data_offset": 2048, 00:19:52.737 "data_size": 63488 00:19:52.737 }, 00:19:52.737 { 00:19:52.737 "name": null, 00:19:52.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.737 "is_configured": false, 00:19:52.737 "data_offset": 2048, 00:19:52.737 "data_size": 63488 00:19:52.737 }, 00:19:52.737 { 00:19:52.737 "name": "BaseBdev3", 00:19:52.737 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:52.737 "is_configured": true, 00:19:52.737 "data_offset": 2048, 00:19:52.737 "data_size": 63488 00:19:52.737 }, 00:19:52.737 { 00:19:52.737 "name": "BaseBdev4", 00:19:52.737 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:52.737 "is_configured": true, 00:19:52.737 "data_offset": 2048, 00:19:52.737 "data_size": 63488 00:19:52.737 } 00:19:52.737 ] 00:19:52.737 }' 00:19:52.737 14:21:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.737 14:21:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.737 14:21:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.738 14:21:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.738 14:21:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:53.674 [2024-11-18 14:21:45.494105] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:53.674 [2024-11-18 14:21:45.494176] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:53.674 [2024-11-18 14:21:45.494334] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.674 14:21:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.933 14:21:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:53.933 "name": "raid_bdev1", 00:19:53.933 "uuid": 
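Two details in the trace above are worth noting. First, the '[: =: unary operator expected' message from bdev_raid.sh line 617 is the classic unquoted-empty-variable pitfall: the left operand of '[' ... = false ']' expanded to nothing; quoting the expansion or using [[ ... = false ]] avoids it, and execution continues here only because the errored test returns non-zero and is treated as false. Second, the @657-@662 sequence is a plain timeout-bounded poll of the rebuild process. A condensed sketch of that polling logic, built from the commands that appear in the trace (socket path and timeout value taken from it; the verify helper is collapsed into the jq check):

    sock=/var/tmp/spdk-raid.sock
    timeout=460
    while (( SECONDS < timeout )); do
        info=$(./scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
               jq -r '.[] | select(.name == "raid_bdev1")')
        # .process disappears once the rebuild finishes, so type falls back to "none"
        [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
        sleep 1
    done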
"44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:53.933 "strip_size_kb": 0, 00:19:53.933 "state": "online", 00:19:53.933 "raid_level": "raid1", 00:19:53.933 "superblock": true, 00:19:53.933 "num_base_bdevs": 4, 00:19:53.933 "num_base_bdevs_discovered": 3, 00:19:53.933 "num_base_bdevs_operational": 3, 00:19:53.933 "base_bdevs_list": [ 00:19:53.933 { 00:19:53.933 "name": "spare", 00:19:53.933 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:53.933 "is_configured": true, 00:19:53.933 "data_offset": 2048, 00:19:53.933 "data_size": 63488 00:19:53.933 }, 00:19:53.933 { 00:19:53.933 "name": null, 00:19:53.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.933 "is_configured": false, 00:19:53.933 "data_offset": 2048, 00:19:53.933 "data_size": 63488 00:19:53.933 }, 00:19:53.933 { 00:19:53.933 "name": "BaseBdev3", 00:19:53.933 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:53.933 "is_configured": true, 00:19:53.933 "data_offset": 2048, 00:19:53.933 "data_size": 63488 00:19:53.933 }, 00:19:53.933 { 00:19:53.933 "name": "BaseBdev4", 00:19:53.933 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:53.933 "is_configured": true, 00:19:53.933 "data_offset": 2048, 00:19:53.933 "data_size": 63488 00:19:53.933 } 00:19:53.933 ] 00:19:53.933 }' 00:19:53.933 14:21:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:53.933 14:21:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:53.933 14:21:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@660 -- # break 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.192 14:21:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:54.192 "name": "raid_bdev1", 00:19:54.192 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:54.192 "strip_size_kb": 0, 00:19:54.192 "state": "online", 00:19:54.192 "raid_level": "raid1", 00:19:54.192 "superblock": true, 00:19:54.192 "num_base_bdevs": 4, 00:19:54.192 "num_base_bdevs_discovered": 3, 00:19:54.192 "num_base_bdevs_operational": 3, 00:19:54.192 "base_bdevs_list": [ 00:19:54.192 { 00:19:54.193 "name": "spare", 00:19:54.193 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:54.193 "is_configured": true, 00:19:54.193 "data_offset": 2048, 00:19:54.193 "data_size": 63488 00:19:54.193 }, 00:19:54.193 { 00:19:54.193 "name": null, 00:19:54.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.193 "is_configured": false, 00:19:54.193 "data_offset": 2048, 00:19:54.193 "data_size": 63488 00:19:54.193 }, 00:19:54.193 { 00:19:54.193 "name": "BaseBdev3", 00:19:54.193 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:54.193 "is_configured": true, 00:19:54.193 "data_offset": 2048, 00:19:54.193 "data_size": 63488 00:19:54.193 }, 00:19:54.193 { 00:19:54.193 "name": "BaseBdev4", 00:19:54.193 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:54.193 
"is_configured": true, 00:19:54.193 "data_offset": 2048, 00:19:54.193 "data_size": 63488 00:19:54.193 } 00:19:54.193 ] 00:19:54.193 }' 00:19:54.193 14:21:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.451 14:21:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.710 14:21:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.710 "name": "raid_bdev1", 00:19:54.710 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:54.710 "strip_size_kb": 0, 00:19:54.710 "state": "online", 00:19:54.710 "raid_level": "raid1", 00:19:54.710 "superblock": true, 00:19:54.710 "num_base_bdevs": 4, 00:19:54.710 "num_base_bdevs_discovered": 3, 00:19:54.710 "num_base_bdevs_operational": 3, 00:19:54.710 "base_bdevs_list": [ 00:19:54.710 { 00:19:54.710 "name": "spare", 00:19:54.710 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:54.710 "is_configured": true, 00:19:54.710 "data_offset": 2048, 00:19:54.710 "data_size": 63488 00:19:54.710 }, 00:19:54.710 { 00:19:54.710 "name": null, 00:19:54.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.710 "is_configured": false, 00:19:54.710 "data_offset": 2048, 00:19:54.710 "data_size": 63488 00:19:54.710 }, 00:19:54.710 { 00:19:54.710 "name": "BaseBdev3", 00:19:54.710 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:54.710 "is_configured": true, 00:19:54.710 "data_offset": 2048, 00:19:54.710 "data_size": 63488 00:19:54.710 }, 00:19:54.710 { 00:19:54.710 "name": "BaseBdev4", 00:19:54.710 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:54.710 "is_configured": true, 00:19:54.711 "data_offset": 2048, 00:19:54.711 "data_size": 63488 00:19:54.711 } 00:19:54.711 ] 00:19:54.711 }' 00:19:54.711 14:21:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.711 14:21:46 -- common/autotest_common.sh@10 -- # set +x 00:19:55.278 14:21:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:55.536 [2024-11-18 14:21:47.439478] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:55.536 [2024-11-18 14:21:47.439504] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:55.536 [2024-11-18 14:21:47.439591] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:55.536 [2024-11-18 
14:21:47.439685] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:55.536 [2024-11-18 14:21:47.439699] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:55.536 14:21:47 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:55.536 14:21:47 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.795 14:21:47 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:55.795 14:21:47 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:55.795 14:21:47 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@12 -- # local i 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:55.795 14:21:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:56.054 /dev/nbd0 00:19:56.054 14:21:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.054 14:21:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.054 14:21:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:56.054 14:21:47 -- common/autotest_common.sh@867 -- # local i 00:19:56.054 14:21:47 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:56.054 14:21:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:56.054 14:21:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:56.054 14:21:47 -- common/autotest_common.sh@871 -- # break 00:19:56.054 14:21:47 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:56.054 14:21:47 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:56.054 14:21:47 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.054 1+0 records in 00:19:56.054 1+0 records out 00:19:56.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345322 s, 11.9 MB/s 00:19:56.054 14:21:47 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.054 14:21:48 -- common/autotest_common.sh@884 -- # size=4096 00:19:56.054 14:21:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.054 14:21:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:56.054 14:21:48 -- common/autotest_common.sh@887 -- # return 0 00:19:56.054 14:21:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.054 14:21:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.054 14:21:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:56.313 /dev/nbd1 00:19:56.313 14:21:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.313 14:21:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.313 14:21:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:56.313 14:21:48 -- 
common/autotest_common.sh@867 -- # local i 00:19:56.313 14:21:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:56.313 14:21:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:56.313 14:21:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:56.313 14:21:48 -- common/autotest_common.sh@871 -- # break 00:19:56.313 14:21:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:56.313 14:21:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:56.313 14:21:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.313 1+0 records in 00:19:56.313 1+0 records out 00:19:56.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552112 s, 7.4 MB/s 00:19:56.313 14:21:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.313 14:21:48 -- common/autotest_common.sh@884 -- # size=4096 00:19:56.313 14:21:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.313 14:21:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:56.313 14:21:48 -- common/autotest_common.sh@887 -- # return 0 00:19:56.313 14:21:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.313 14:21:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.313 14:21:48 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:56.573 14:21:48 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@51 -- # local i 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@41 -- # break 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@45 -- # return 0 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:56.573 14:21:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@41 -- # break 00:19:56.831 14:21:48 -- bdev/nbd_common.sh@45 -- # return 0 00:19:56.831 14:21:48 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:56.831 14:21:48 -- bdev/bdev_raid.sh@694 -- # for bdev in 
"${base_bdevs[@]}" 00:19:56.831 14:21:48 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:56.831 14:21:48 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:57.090 14:21:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:57.348 [2024-11-18 14:21:49.356597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:57.348 [2024-11-18 14:21:49.356675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.348 [2024-11-18 14:21:49.356719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:57.348 [2024-11-18 14:21:49.356742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.348 [2024-11-18 14:21:49.358752] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.348 [2024-11-18 14:21:49.358812] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:57.348 [2024-11-18 14:21:49.358890] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:57.348 [2024-11-18 14:21:49.358957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.348 BaseBdev1 00:19:57.348 14:21:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:57.348 14:21:49 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:19:57.348 14:21:49 -- bdev/bdev_raid.sh@696 -- # continue 00:19:57.348 14:21:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:57.348 14:21:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:19:57.348 14:21:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:19:57.606 14:21:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:57.865 [2024-11-18 14:21:49.728309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:57.865 [2024-11-18 14:21:49.728371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.865 [2024-11-18 14:21:49.728406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:57.865 [2024-11-18 14:21:49.728430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.865 [2024-11-18 14:21:49.728753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.866 [2024-11-18 14:21:49.728808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:57.866 [2024-11-18 14:21:49.728867] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:19:57.866 [2024-11-18 14:21:49.728881] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:19:57.866 [2024-11-18 14:21:49.728888] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.866 [2024-11-18 14:21:49.728911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:19:57.866 [2024-11-18 14:21:49.728949] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.866 BaseBdev3 00:19:57.866 14:21:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:57.866 14:21:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:19:57.866 14:21:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:19:58.125 14:21:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:58.125 [2024-11-18 14:21:50.168364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:58.125 [2024-11-18 14:21:50.168424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.125 [2024-11-18 14:21:50.168458] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:58.125 [2024-11-18 14:21:50.168482] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.125 [2024-11-18 14:21:50.168779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.125 [2024-11-18 14:21:50.168831] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:58.125 [2024-11-18 14:21:50.168889] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:19:58.125 [2024-11-18 14:21:50.168917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:58.125 BaseBdev4 00:19:58.125 14:21:50 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:58.384 14:21:50 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:58.643 [2024-11-18 14:21:50.532438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:58.643 [2024-11-18 14:21:50.532499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.643 [2024-11-18 14:21:50.532527] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:58.643 [2024-11-18 14:21:50.532551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.643 [2024-11-18 14:21:50.532879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.643 [2024-11-18 14:21:50.532942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:58.643 [2024-11-18 14:21:50.533009] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:58.643 [2024-11-18 14:21:50.533045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.643 spare 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.643 14:21:50 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.643 14:21:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.643 [2024-11-18 14:21:50.633147] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:19:58.643 [2024-11-18 14:21:50.633168] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:58.643 [2024-11-18 14:21:50.633292] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf0b0 00:19:58.643 [2024-11-18 14:21:50.633641] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:19:58.643 [2024-11-18 14:21:50.633661] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:19:58.643 [2024-11-18 14:21:50.633769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.902 14:21:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.902 "name": "raid_bdev1", 00:19:58.902 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 00:19:58.902 "strip_size_kb": 0, 00:19:58.902 "state": "online", 00:19:58.902 "raid_level": "raid1", 00:19:58.902 "superblock": true, 00:19:58.902 "num_base_bdevs": 4, 00:19:58.902 "num_base_bdevs_discovered": 3, 00:19:58.902 "num_base_bdevs_operational": 3, 00:19:58.902 "base_bdevs_list": [ 00:19:58.902 { 00:19:58.902 "name": "spare", 00:19:58.902 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:58.902 "is_configured": true, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 }, 00:19:58.902 { 00:19:58.902 "name": null, 00:19:58.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.902 "is_configured": false, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 }, 00:19:58.902 { 00:19:58.902 "name": "BaseBdev3", 00:19:58.902 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:58.902 "is_configured": true, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 }, 00:19:58.902 { 00:19:58.902 "name": "BaseBdev4", 00:19:58.902 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:58.902 "is_configured": true, 00:19:58.902 "data_offset": 2048, 00:19:58.902 "data_size": 63488 00:19:58.902 } 00:19:58.902 ] 00:19:58.902 }' 00:19:58.902 14:21:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.902 14:21:50 -- common/autotest_common.sh@10 -- # set +x 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.470 14:21:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.729 14:21:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:59.729 "name": "raid_bdev1", 00:19:59.729 "uuid": "44e320b8-2b1f-4c82-9990-573de2fc6d6d", 
00:19:59.729 "strip_size_kb": 0, 00:19:59.729 "state": "online", 00:19:59.729 "raid_level": "raid1", 00:19:59.729 "superblock": true, 00:19:59.729 "num_base_bdevs": 4, 00:19:59.729 "num_base_bdevs_discovered": 3, 00:19:59.729 "num_base_bdevs_operational": 3, 00:19:59.729 "base_bdevs_list": [ 00:19:59.729 { 00:19:59.729 "name": "spare", 00:19:59.729 "uuid": "d72fcfc0-9110-5376-95ca-55508ff3dae5", 00:19:59.729 "is_configured": true, 00:19:59.729 "data_offset": 2048, 00:19:59.729 "data_size": 63488 00:19:59.729 }, 00:19:59.729 { 00:19:59.729 "name": null, 00:19:59.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.729 "is_configured": false, 00:19:59.729 "data_offset": 2048, 00:19:59.730 "data_size": 63488 00:19:59.730 }, 00:19:59.730 { 00:19:59.730 "name": "BaseBdev3", 00:19:59.730 "uuid": "964e8de0-5fc6-59ab-8328-ac22df7cd929", 00:19:59.730 "is_configured": true, 00:19:59.730 "data_offset": 2048, 00:19:59.730 "data_size": 63488 00:19:59.730 }, 00:19:59.730 { 00:19:59.730 "name": "BaseBdev4", 00:19:59.730 "uuid": "16dfcbb6-e643-58d4-8889-71420f8a317a", 00:19:59.730 "is_configured": true, 00:19:59.730 "data_offset": 2048, 00:19:59.730 "data_size": 63488 00:19:59.730 } 00:19:59.730 ] 00:19:59.730 }' 00:19:59.730 14:21:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:59.730 14:21:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:59.730 14:21:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:59.730 14:21:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:59.730 14:21:51 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:59.730 14:21:51 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.988 14:21:52 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.988 14:21:52 -- bdev/bdev_raid.sh@709 -- # killprocess 135077 00:19:59.988 14:21:52 -- common/autotest_common.sh@936 -- # '[' -z 135077 ']' 00:19:59.988 14:21:52 -- common/autotest_common.sh@940 -- # kill -0 135077 00:19:59.988 14:21:52 -- common/autotest_common.sh@941 -- # uname 00:19:59.988 14:21:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.988 14:21:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135077 00:19:59.988 14:21:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.988 14:21:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.988 killing process with pid 135077 00:19:59.988 14:21:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135077' 00:19:59.988 Received shutdown signal, test time was about 60.000000 seconds 00:19:59.988 00:19:59.988 Latency(us) 00:19:59.988 [2024-11-18T14:21:52.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.988 [2024-11-18T14:21:52.062Z] =================================================================================================================== 00:19:59.988 [2024-11-18T14:21:52.062Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.988 14:21:52 -- common/autotest_common.sh@955 -- # kill 135077 00:19:59.988 [2024-11-18 14:21:52.060409] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.988 [2024-11-18 14:21:52.060473] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.988 14:21:52 -- common/autotest_common.sh@960 -- # wait 135077 00:19:59.988 [2024-11-18 14:21:52.060539] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.988 [2024-11-18 14:21:52.060551] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:20:00.247 [2024-11-18 14:21:52.116125] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:00.506 00:20:00.506 real 0m25.305s 00:20:00.506 user 0m37.314s 00:20:00.506 sys 0m3.810s 00:20:00.506 14:21:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:00.506 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 ************************************ 00:20:00.506 END TEST raid_rebuild_test_sb 00:20:00.506 ************************************ 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:20:00.506 14:21:52 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:00.506 14:21:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:00.506 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 ************************************ 00:20:00.506 START TEST raid_rebuild_test_io 00:20:00.506 ************************************ 00:20:00.506 14:21:52 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@544 -- # raid_pid=135717 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135717 
/var/tmp/spdk-raid.sock 00:20:00.506 14:21:52 -- common/autotest_common.sh@829 -- # '[' -z 135717 ']' 00:20:00.506 14:21:52 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:00.506 14:21:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.506 14:21:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.506 14:21:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:00.506 14:21:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.506 14:21:52 -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 [2024-11-18 14:21:52.538703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:00.506 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:00.506 Zero copy mechanism will not be used. 00:20:00.506 [2024-11-18 14:21:52.538881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135717 ] 00:20:00.765 [2024-11-18 14:21:52.676795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.765 [2024-11-18 14:21:52.749053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.765 [2024-11-18 14:21:52.818897] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.700 14:21:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.700 14:21:53 -- common/autotest_common.sh@862 -- # return 0 00:20:01.700 14:21:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.700 14:21:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:01.700 14:21:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:01.700 BaseBdev1 00:20:01.700 14:21:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.700 14:21:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:01.700 14:21:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:01.959 BaseBdev2 00:20:01.959 14:21:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.959 14:21:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:01.959 14:21:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:02.218 BaseBdev3 00:20:02.218 14:21:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:02.218 14:21:54 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:02.218 14:21:54 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:02.476 BaseBdev4 00:20:02.476 14:21:54 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:02.735 spare_malloc 00:20:02.735 14:21:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:02.993 spare_delay 00:20:02.993 14:21:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:02.993 [2024-11-18 14:21:55.037664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:02.993 [2024-11-18 14:21:55.037785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.993 [2024-11-18 14:21:55.037831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:02.993 [2024-11-18 14:21:55.037878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.993 [2024-11-18 14:21:55.040322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.993 [2024-11-18 14:21:55.040376] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:02.993 spare 00:20:02.993 14:21:55 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:03.252 [2024-11-18 14:21:55.221733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.252 [2024-11-18 14:21:55.223715] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.252 [2024-11-18 14:21:55.223773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:03.252 [2024-11-18 14:21:55.223810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:03.252 [2024-11-18 14:21:55.223891] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:20:03.252 [2024-11-18 14:21:55.223903] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:03.252 [2024-11-18 14:21:55.224036] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:03.252 [2024-11-18 14:21:55.224406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:20:03.252 [2024-11-18 14:21:55.224427] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:20:03.252 [2024-11-18 14:21:55.224594] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.252 14:21:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:03.511 14:21:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.511 "name": "raid_bdev1", 00:20:03.511 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:03.511 "strip_size_kb": 0, 00:20:03.511 "state": "online", 00:20:03.511 "raid_level": "raid1", 00:20:03.511 "superblock": false, 00:20:03.511 "num_base_bdevs": 4, 00:20:03.511 "num_base_bdevs_discovered": 4, 00:20:03.511 "num_base_bdevs_operational": 4, 00:20:03.511 "base_bdevs_list": [ 00:20:03.511 { 00:20:03.511 "name": "BaseBdev1", 00:20:03.511 "uuid": "990a6ce5-68ca-489a-9b99-d7af30384e24", 00:20:03.511 "is_configured": true, 00:20:03.511 "data_offset": 0, 00:20:03.511 "data_size": 65536 00:20:03.511 }, 00:20:03.511 { 00:20:03.511 "name": "BaseBdev2", 00:20:03.511 "uuid": "e1064c59-06f3-4c1f-a5af-0377a1ea2eda", 00:20:03.511 "is_configured": true, 00:20:03.511 "data_offset": 0, 00:20:03.511 "data_size": 65536 00:20:03.511 }, 00:20:03.511 { 00:20:03.511 "name": "BaseBdev3", 00:20:03.511 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:03.511 "is_configured": true, 00:20:03.511 "data_offset": 0, 00:20:03.511 "data_size": 65536 00:20:03.511 }, 00:20:03.511 { 00:20:03.511 "name": "BaseBdev4", 00:20:03.511 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:03.511 "is_configured": true, 00:20:03.511 "data_offset": 0, 00:20:03.511 "data_size": 65536 00:20:03.511 } 00:20:03.511 ] 00:20:03.511 }' 00:20:03.511 14:21:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.511 14:21:55 -- common/autotest_common.sh@10 -- # set +x 00:20:04.078 14:21:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.078 14:21:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:04.336 [2024-11-18 14:21:56.298054] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.336 14:21:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:04.336 14:21:56 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.336 14:21:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:04.595 14:21:56 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:04.595 14:21:56 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:04.595 14:21:56 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:04.595 14:21:56 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:04.595 [2024-11-18 14:21:56.605117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:20:04.595 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:04.595 Zero copy mechanism will not be used. 00:20:04.595 Running I/O for 60 seconds... 
00:20:04.854 [2024-11-18 14:21:56.732817] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.854 [2024-11-18 14:21:56.738764] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.854 14:21:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.113 14:21:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.113 "name": "raid_bdev1", 00:20:05.113 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:05.113 "strip_size_kb": 0, 00:20:05.113 "state": "online", 00:20:05.113 "raid_level": "raid1", 00:20:05.113 "superblock": false, 00:20:05.113 "num_base_bdevs": 4, 00:20:05.113 "num_base_bdevs_discovered": 3, 00:20:05.113 "num_base_bdevs_operational": 3, 00:20:05.113 "base_bdevs_list": [ 00:20:05.113 { 00:20:05.113 "name": null, 00:20:05.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.113 "is_configured": false, 00:20:05.113 "data_offset": 0, 00:20:05.113 "data_size": 65536 00:20:05.113 }, 00:20:05.113 { 00:20:05.113 "name": "BaseBdev2", 00:20:05.113 "uuid": "e1064c59-06f3-4c1f-a5af-0377a1ea2eda", 00:20:05.113 "is_configured": true, 00:20:05.113 "data_offset": 0, 00:20:05.113 "data_size": 65536 00:20:05.113 }, 00:20:05.113 { 00:20:05.113 "name": "BaseBdev3", 00:20:05.113 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:05.113 "is_configured": true, 00:20:05.113 "data_offset": 0, 00:20:05.113 "data_size": 65536 00:20:05.113 }, 00:20:05.113 { 00:20:05.113 "name": "BaseBdev4", 00:20:05.113 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:05.113 "is_configured": true, 00:20:05.113 "data_offset": 0, 00:20:05.113 "data_size": 65536 00:20:05.113 } 00:20:05.113 ] 00:20:05.113 }' 00:20:05.113 14:21:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.113 14:21:57 -- common/autotest_common.sh@10 -- # set +x 00:20:05.681 14:21:57 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:05.940 [2024-11-18 14:21:57.841474] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:05.940 [2024-11-18 14:21:57.841554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.940 14:21:57 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:05.940 [2024-11-18 14:21:57.900147] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:05.940 [2024-11-18 14:21:57.902379] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:06.199 [2024-11-18 
14:21:58.018994] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:06.199 [2024-11-18 14:21:58.020346] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:06.199 [2024-11-18 14:21:58.247422] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:06.199 [2024-11-18 14:21:58.247651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:06.457 [2024-11-18 14:21:58.490413] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:06.716 [2024-11-18 14:21:58.607185] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:06.716 [2024-11-18 14:21:58.607806] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.974 14:21:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.974 [2024-11-18 14:21:58.948185] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:06.974 [2024-11-18 14:21:58.949438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:07.240 14:21:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:07.240 "name": "raid_bdev1", 00:20:07.240 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:07.240 "strip_size_kb": 0, 00:20:07.240 "state": "online", 00:20:07.240 "raid_level": "raid1", 00:20:07.240 "superblock": false, 00:20:07.240 "num_base_bdevs": 4, 00:20:07.240 "num_base_bdevs_discovered": 4, 00:20:07.240 "num_base_bdevs_operational": 4, 00:20:07.240 "process": { 00:20:07.240 "type": "rebuild", 00:20:07.240 "target": "spare", 00:20:07.240 "progress": { 00:20:07.240 "blocks": 14336, 00:20:07.240 "percent": 21 00:20:07.240 } 00:20:07.240 }, 00:20:07.240 "base_bdevs_list": [ 00:20:07.240 { 00:20:07.240 "name": "spare", 00:20:07.240 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:07.240 "is_configured": true, 00:20:07.240 "data_offset": 0, 00:20:07.240 "data_size": 65536 00:20:07.240 }, 00:20:07.240 { 00:20:07.240 "name": "BaseBdev2", 00:20:07.240 "uuid": "e1064c59-06f3-4c1f-a5af-0377a1ea2eda", 00:20:07.240 "is_configured": true, 00:20:07.240 "data_offset": 0, 00:20:07.240 "data_size": 65536 00:20:07.240 }, 00:20:07.240 { 00:20:07.240 "name": "BaseBdev3", 00:20:07.240 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:07.240 "is_configured": true, 00:20:07.240 "data_offset": 0, 00:20:07.240 "data_size": 65536 00:20:07.240 }, 00:20:07.240 { 00:20:07.240 "name": "BaseBdev4", 00:20:07.240 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:07.240 "is_configured": 
true, 00:20:07.240 "data_offset": 0, 00:20:07.240 "data_size": 65536 00:20:07.240 } 00:20:07.240 ] 00:20:07.240 }' 00:20:07.240 14:21:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:07.240 14:21:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.240 14:21:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:07.240 14:21:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.240 14:21:59 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:07.501 [2024-11-18 14:21:59.402986] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.501 [2024-11-18 14:21:59.432888] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:07.501 [2024-11-18 14:21:59.532971] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:07.501 [2024-11-18 14:21:59.542588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.501 [2024-11-18 14:21:59.557709] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.759 14:21:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.760 14:21:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.760 14:21:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.018 14:21:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.018 "name": "raid_bdev1", 00:20:08.018 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:08.018 "strip_size_kb": 0, 00:20:08.018 "state": "online", 00:20:08.018 "raid_level": "raid1", 00:20:08.018 "superblock": false, 00:20:08.018 "num_base_bdevs": 4, 00:20:08.018 "num_base_bdevs_discovered": 3, 00:20:08.018 "num_base_bdevs_operational": 3, 00:20:08.019 "base_bdevs_list": [ 00:20:08.019 { 00:20:08.019 "name": null, 00:20:08.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.019 "is_configured": false, 00:20:08.019 "data_offset": 0, 00:20:08.019 "data_size": 65536 00:20:08.019 }, 00:20:08.019 { 00:20:08.019 "name": "BaseBdev2", 00:20:08.019 "uuid": "e1064c59-06f3-4c1f-a5af-0377a1ea2eda", 00:20:08.019 "is_configured": true, 00:20:08.019 "data_offset": 0, 00:20:08.019 "data_size": 65536 00:20:08.019 }, 00:20:08.019 { 00:20:08.019 "name": "BaseBdev3", 00:20:08.019 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:08.019 "is_configured": true, 00:20:08.019 "data_offset": 0, 00:20:08.019 "data_size": 65536 00:20:08.019 }, 00:20:08.019 { 00:20:08.019 "name": "BaseBdev4", 00:20:08.019 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 
00:20:08.019 "is_configured": true, 00:20:08.019 "data_offset": 0, 00:20:08.019 "data_size": 65536 00:20:08.019 } 00:20:08.019 ] 00:20:08.019 }' 00:20:08.019 14:21:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.019 14:21:59 -- common/autotest_common.sh@10 -- # set +x 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:08.628 "name": "raid_bdev1", 00:20:08.628 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:08.628 "strip_size_kb": 0, 00:20:08.628 "state": "online", 00:20:08.628 "raid_level": "raid1", 00:20:08.628 "superblock": false, 00:20:08.628 "num_base_bdevs": 4, 00:20:08.628 "num_base_bdevs_discovered": 3, 00:20:08.628 "num_base_bdevs_operational": 3, 00:20:08.628 "base_bdevs_list": [ 00:20:08.628 { 00:20:08.628 "name": null, 00:20:08.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.628 "is_configured": false, 00:20:08.628 "data_offset": 0, 00:20:08.628 "data_size": 65536 00:20:08.628 }, 00:20:08.628 { 00:20:08.628 "name": "BaseBdev2", 00:20:08.628 "uuid": "e1064c59-06f3-4c1f-a5af-0377a1ea2eda", 00:20:08.628 "is_configured": true, 00:20:08.628 "data_offset": 0, 00:20:08.628 "data_size": 65536 00:20:08.628 }, 00:20:08.628 { 00:20:08.628 "name": "BaseBdev3", 00:20:08.628 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:08.628 "is_configured": true, 00:20:08.628 "data_offset": 0, 00:20:08.628 "data_size": 65536 00:20:08.628 }, 00:20:08.628 { 00:20:08.628 "name": "BaseBdev4", 00:20:08.628 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:08.628 "is_configured": true, 00:20:08.628 "data_offset": 0, 00:20:08.628 "data_size": 65536 00:20:08.628 } 00:20:08.628 ] 00:20:08.628 }' 00:20:08.628 14:22:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:08.899 14:22:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:08.899 14:22:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:08.899 14:22:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:08.899 14:22:00 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:09.157 [2024-11-18 14:22:01.046583] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:09.158 [2024-11-18 14:22:01.046657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:09.158 14:22:01 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:09.158 [2024-11-18 14:22:01.098459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:20:09.158 [2024-11-18 14:22:01.100534] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:09.158 [2024-11-18 14:22:01.216191] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:09.158 [2024-11-18 
14:22:01.216666] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:09.415 [2024-11-18 14:22:01.339517] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:09.415 [2024-11-18 14:22:01.340101] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:09.673 [2024-11-18 14:22:01.683150] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:09.673 [2024-11-18 14:22:01.683601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:09.932 [2024-11-18 14:22:01.813006] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.191 14:22:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.191 [2024-11-18 14:22:02.154896] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:10.191 [2024-11-18 14:22:02.156088] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.450 "name": "raid_bdev1", 00:20:10.450 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:10.450 "strip_size_kb": 0, 00:20:10.450 "state": "online", 00:20:10.450 "raid_level": "raid1", 00:20:10.450 "superblock": false, 00:20:10.450 "num_base_bdevs": 4, 00:20:10.450 "num_base_bdevs_discovered": 4, 00:20:10.450 "num_base_bdevs_operational": 4, 00:20:10.450 "process": { 00:20:10.450 "type": "rebuild", 00:20:10.450 "target": "spare", 00:20:10.450 "progress": { 00:20:10.450 "blocks": 14336, 00:20:10.450 "percent": 21 00:20:10.450 } 00:20:10.450 }, 00:20:10.450 "base_bdevs_list": [ 00:20:10.450 { 00:20:10.450 "name": "spare", 00:20:10.450 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:10.450 "is_configured": true, 00:20:10.450 "data_offset": 0, 00:20:10.450 "data_size": 65536 00:20:10.450 }, 00:20:10.450 { 00:20:10.450 "name": "BaseBdev2", 00:20:10.450 "uuid": "e1064c59-06f3-4c1f-a5af-0377a1ea2eda", 00:20:10.450 "is_configured": true, 00:20:10.450 "data_offset": 0, 00:20:10.450 "data_size": 65536 00:20:10.450 }, 00:20:10.450 { 00:20:10.450 "name": "BaseBdev3", 00:20:10.450 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:10.450 "is_configured": true, 00:20:10.450 "data_offset": 0, 00:20:10.450 "data_size": 65536 00:20:10.450 }, 00:20:10.450 { 00:20:10.450 "name": "BaseBdev4", 00:20:10.450 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:10.450 "is_configured": true, 00:20:10.450 "data_offset": 0, 00:20:10.450 "data_size": 65536 00:20:10.450 } 00:20:10.450 ] 00:20:10.450 }' 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.type // "none"' 00:20:10.450 [2024-11-18 14:22:02.370002] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:10.450 [2024-11-18 14:22:02.370327] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:10.450 14:22:02 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:10.709 [2024-11-18 14:22:02.654097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:10.709 [2024-11-18 14:22:02.720188] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:10.968 [2024-11-18 14:22:02.836367] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002390 00:20:10.968 [2024-11-18 14:22:02.836399] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600 00:20:10.968 [2024-11-18 14:22:02.838156] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.968 14:22:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.227 "name": "raid_bdev1", 00:20:11.227 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:11.227 "strip_size_kb": 0, 00:20:11.227 "state": "online", 00:20:11.227 "raid_level": "raid1", 00:20:11.227 "superblock": false, 00:20:11.227 "num_base_bdevs": 4, 00:20:11.227 "num_base_bdevs_discovered": 3, 00:20:11.227 "num_base_bdevs_operational": 3, 00:20:11.227 "process": { 00:20:11.227 "type": "rebuild", 00:20:11.227 "target": "spare", 00:20:11.227 "progress": { 00:20:11.227 "blocks": 22528, 00:20:11.227 "percent": 34 00:20:11.227 } 00:20:11.227 }, 00:20:11.227 "base_bdevs_list": [ 00:20:11.227 { 00:20:11.227 "name": "spare", 00:20:11.227 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:11.227 "is_configured": true, 00:20:11.227 "data_offset": 0, 00:20:11.227 "data_size": 65536 00:20:11.227 }, 00:20:11.227 { 00:20:11.227 "name": null, 00:20:11.227 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:11.227 "is_configured": false, 00:20:11.227 "data_offset": 0, 00:20:11.227 "data_size": 65536 00:20:11.227 }, 00:20:11.227 { 00:20:11.227 "name": "BaseBdev3", 00:20:11.227 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:11.227 "is_configured": true, 00:20:11.227 "data_offset": 0, 00:20:11.227 "data_size": 65536 00:20:11.227 }, 00:20:11.227 { 00:20:11.227 "name": "BaseBdev4", 00:20:11.227 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:11.227 "is_configured": true, 00:20:11.227 "data_offset": 0, 00:20:11.227 "data_size": 65536 00:20:11.227 } 00:20:11.227 ] 00:20:11.227 }' 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@657 -- # local timeout=479 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.227 14:22:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.486 14:22:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:11.486 "name": "raid_bdev1", 00:20:11.486 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:11.486 "strip_size_kb": 0, 00:20:11.486 "state": "online", 00:20:11.486 "raid_level": "raid1", 00:20:11.486 "superblock": false, 00:20:11.486 "num_base_bdevs": 4, 00:20:11.486 "num_base_bdevs_discovered": 3, 00:20:11.486 "num_base_bdevs_operational": 3, 00:20:11.486 "process": { 00:20:11.486 "type": "rebuild", 00:20:11.486 "target": "spare", 00:20:11.486 "progress": { 00:20:11.486 "blocks": 28672, 00:20:11.486 "percent": 43 00:20:11.486 } 00:20:11.486 }, 00:20:11.486 "base_bdevs_list": [ 00:20:11.486 { 00:20:11.486 "name": "spare", 00:20:11.486 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:11.486 "is_configured": true, 00:20:11.486 "data_offset": 0, 00:20:11.486 "data_size": 65536 00:20:11.486 }, 00:20:11.486 { 00:20:11.486 "name": null, 00:20:11.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.486 "is_configured": false, 00:20:11.486 "data_offset": 0, 00:20:11.486 "data_size": 65536 00:20:11.486 }, 00:20:11.486 { 00:20:11.486 "name": "BaseBdev3", 00:20:11.486 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:11.486 "is_configured": true, 00:20:11.486 "data_offset": 0, 00:20:11.486 "data_size": 65536 00:20:11.486 }, 00:20:11.486 { 00:20:11.486 "name": "BaseBdev4", 00:20:11.486 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:11.486 "is_configured": true, 00:20:11.486 "data_offset": 0, 00:20:11.486 "data_size": 65536 00:20:11.487 } 00:20:11.487 ] 00:20:11.487 }' 00:20:11.487 14:22:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:11.487 14:22:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:11.487 14:22:03 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:11.487 14:22:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.487 14:22:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:11.745 [2024-11-18 14:22:03.582838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:12.004 [2024-11-18 14:22:04.008317] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:12.004 [2024-11-18 14:22:04.008755] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:12.263 [2024-11-18 14:22:04.124690] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:12.521 [2024-11-18 14:22:04.428849] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:12.521 14:22:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.522 14:22:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.780 14:22:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.780 "name": "raid_bdev1", 00:20:12.780 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:12.780 "strip_size_kb": 0, 00:20:12.780 "state": "online", 00:20:12.780 "raid_level": "raid1", 00:20:12.780 "superblock": false, 00:20:12.780 "num_base_bdevs": 4, 00:20:12.780 "num_base_bdevs_discovered": 3, 00:20:12.780 "num_base_bdevs_operational": 3, 00:20:12.780 "process": { 00:20:12.780 "type": "rebuild", 00:20:12.780 "target": "spare", 00:20:12.780 "progress": { 00:20:12.780 "blocks": 49152, 00:20:12.780 "percent": 75 00:20:12.780 } 00:20:12.780 }, 00:20:12.780 "base_bdevs_list": [ 00:20:12.780 { 00:20:12.780 "name": "spare", 00:20:12.780 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:12.780 "is_configured": true, 00:20:12.780 "data_offset": 0, 00:20:12.780 "data_size": 65536 00:20:12.780 }, 00:20:12.780 { 00:20:12.780 "name": null, 00:20:12.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.780 "is_configured": false, 00:20:12.780 "data_offset": 0, 00:20:12.780 "data_size": 65536 00:20:12.780 }, 00:20:12.780 { 00:20:12.780 "name": "BaseBdev3", 00:20:12.780 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:12.780 "is_configured": true, 00:20:12.780 "data_offset": 0, 00:20:12.780 "data_size": 65536 00:20:12.780 }, 00:20:12.780 { 00:20:12.780 "name": "BaseBdev4", 00:20:12.780 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:12.780 "is_configured": true, 00:20:12.780 "data_offset": 0, 00:20:12.780 "data_size": 65536 00:20:12.780 } 00:20:12.780 ] 00:20:12.780 }' 00:20:12.780 14:22:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.780 14:22:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.780 14:22:04 -- bdev/bdev_raid.sh@191 -- # jq 
-r '.process.target // "none"' 00:20:12.780 14:22:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.780 14:22:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:13.039 [2024-11-18 14:22:05.063077] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:13.607 [2024-11-18 14:22:05.467580] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:13.607 [2024-11-18 14:22:05.535830] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:13.607 [2024-11-18 14:22:05.537440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.864 14:22:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:14.122 "name": "raid_bdev1", 00:20:14.122 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:14.122 "strip_size_kb": 0, 00:20:14.122 "state": "online", 00:20:14.122 "raid_level": "raid1", 00:20:14.122 "superblock": false, 00:20:14.122 "num_base_bdevs": 4, 00:20:14.122 "num_base_bdevs_discovered": 3, 00:20:14.122 "num_base_bdevs_operational": 3, 00:20:14.122 "base_bdevs_list": [ 00:20:14.122 { 00:20:14.122 "name": "spare", 00:20:14.122 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:14.122 "is_configured": true, 00:20:14.122 "data_offset": 0, 00:20:14.122 "data_size": 65536 00:20:14.122 }, 00:20:14.122 { 00:20:14.122 "name": null, 00:20:14.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.122 "is_configured": false, 00:20:14.122 "data_offset": 0, 00:20:14.122 "data_size": 65536 00:20:14.122 }, 00:20:14.122 { 00:20:14.122 "name": "BaseBdev3", 00:20:14.122 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:14.122 "is_configured": true, 00:20:14.122 "data_offset": 0, 00:20:14.122 "data_size": 65536 00:20:14.122 }, 00:20:14.122 { 00:20:14.122 "name": "BaseBdev4", 00:20:14.122 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:14.122 "is_configured": true, 00:20:14.122 "data_offset": 0, 00:20:14.122 "data_size": 65536 00:20:14.122 } 00:20:14.122 ] 00:20:14.122 }' 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@660 -- # break 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:14.122 
14:22:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:14.122 14:22:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.381 14:22:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.381 14:22:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:14.381 "name": "raid_bdev1", 00:20:14.381 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:14.381 "strip_size_kb": 0, 00:20:14.381 "state": "online", 00:20:14.381 "raid_level": "raid1", 00:20:14.381 "superblock": false, 00:20:14.381 "num_base_bdevs": 4, 00:20:14.381 "num_base_bdevs_discovered": 3, 00:20:14.381 "num_base_bdevs_operational": 3, 00:20:14.381 "base_bdevs_list": [ 00:20:14.381 { 00:20:14.381 "name": "spare", 00:20:14.381 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:14.381 "is_configured": true, 00:20:14.381 "data_offset": 0, 00:20:14.381 "data_size": 65536 00:20:14.381 }, 00:20:14.381 { 00:20:14.381 "name": null, 00:20:14.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.381 "is_configured": false, 00:20:14.381 "data_offset": 0, 00:20:14.381 "data_size": 65536 00:20:14.381 }, 00:20:14.381 { 00:20:14.381 "name": "BaseBdev3", 00:20:14.381 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:14.381 "is_configured": true, 00:20:14.381 "data_offset": 0, 00:20:14.381 "data_size": 65536 00:20:14.381 }, 00:20:14.381 { 00:20:14.381 "name": "BaseBdev4", 00:20:14.381 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:14.381 "is_configured": true, 00:20:14.381 "data_offset": 0, 00:20:14.381 "data_size": 65536 00:20:14.381 } 00:20:14.381 ] 00:20:14.381 }' 00:20:14.381 14:22:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.640 14:22:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.899 14:22:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.899 "name": "raid_bdev1", 00:20:14.899 "uuid": "f92bf1a4-f07c-48bf-9f0e-cbee57cb48a8", 00:20:14.899 "strip_size_kb": 0, 00:20:14.899 "state": "online", 00:20:14.899 "raid_level": "raid1", 00:20:14.899 "superblock": false, 00:20:14.899 "num_base_bdevs": 4, 00:20:14.899 "num_base_bdevs_discovered": 3, 00:20:14.899 "num_base_bdevs_operational": 3, 00:20:14.899 "base_bdevs_list": [ 00:20:14.899 { 00:20:14.899 "name": "spare", 
00:20:14.899 "uuid": "a73bb826-32bf-56ac-9ed3-46743c86bfd3", 00:20:14.899 "is_configured": true, 00:20:14.899 "data_offset": 0, 00:20:14.899 "data_size": 65536 00:20:14.899 }, 00:20:14.899 { 00:20:14.899 "name": null, 00:20:14.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.899 "is_configured": false, 00:20:14.899 "data_offset": 0, 00:20:14.899 "data_size": 65536 00:20:14.899 }, 00:20:14.899 { 00:20:14.899 "name": "BaseBdev3", 00:20:14.899 "uuid": "7c2544a0-9a8b-42a7-9d7e-30a8421b209e", 00:20:14.899 "is_configured": true, 00:20:14.899 "data_offset": 0, 00:20:14.899 "data_size": 65536 00:20:14.899 }, 00:20:14.899 { 00:20:14.899 "name": "BaseBdev4", 00:20:14.899 "uuid": "6717cf27-40d5-48cd-8ea2-83734bde9b5c", 00:20:14.899 "is_configured": true, 00:20:14.899 "data_offset": 0, 00:20:14.899 "data_size": 65536 00:20:14.899 } 00:20:14.899 ] 00:20:14.899 }' 00:20:14.899 14:22:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.899 14:22:06 -- common/autotest_common.sh@10 -- # set +x 00:20:15.466 14:22:07 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:15.725 [2024-11-18 14:22:07.564101] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.725 [2024-11-18 14:22:07.564158] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.725 00:20:15.725 Latency(us) 00:20:15.725 [2024-11-18T14:22:07.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.725 [2024-11-18T14:22:07.799Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:15.725 raid_bdev1 : 10.98 110.32 330.97 0.00 0.00 13129.54 283.00 116296.61 00:20:15.725 [2024-11-18T14:22:07.799Z] =================================================================================================================== 00:20:15.725 [2024-11-18T14:22:07.799Z] Total : 110.32 330.97 0.00 0.00 13129.54 283.00 116296.61 00:20:15.725 [2024-11-18 14:22:07.587493] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.725 [2024-11-18 14:22:07.587542] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.725 [2024-11-18 14:22:07.587641] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.725 [2024-11-18 14:22:07.587654] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:20:15.725 0 00:20:15.725 14:22:07 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.725 14:22:07 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:15.983 14:22:07 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:15.983 14:22:07 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:15.983 14:22:07 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@12 -- # local i 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:15.983 14:22:07 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:15.983 14:22:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:16.242 /dev/nbd0 00:20:16.242 14:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:16.242 14:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:16.242 14:22:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:16.242 14:22:08 -- common/autotest_common.sh@867 -- # local i 00:20:16.242 14:22:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:16.242 14:22:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:16.242 14:22:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:16.242 14:22:08 -- common/autotest_common.sh@871 -- # break 00:20:16.242 14:22:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:16.242 14:22:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:16.242 14:22:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:16.242 1+0 records in 00:20:16.242 1+0 records out 00:20:16.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054615 s, 7.5 MB/s 00:20:16.243 14:22:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.243 14:22:08 -- common/autotest_common.sh@884 -- # size=4096 00:20:16.243 14:22:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.243 14:22:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:16.243 14:22:08 -- common/autotest_common.sh@887 -- # return 0 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.243 14:22:08 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:16.243 14:22:08 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:20:16.243 14:22:08 -- bdev/bdev_raid.sh@678 -- # continue 00:20:16.243 14:22:08 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:16.243 14:22:08 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:20:16.243 14:22:08 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@12 -- # local i 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.243 14:22:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:16.502 /dev/nbd1 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:16.502 14:22:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:16.502 14:22:08 -- common/autotest_common.sh@867 -- # local i 00:20:16.502 14:22:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:16.502 14:22:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:16.502 14:22:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 
/proc/partitions 00:20:16.502 14:22:08 -- common/autotest_common.sh@871 -- # break 00:20:16.502 14:22:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:16.502 14:22:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:16.502 14:22:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:16.502 1+0 records in 00:20:16.502 1+0 records out 00:20:16.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051934 s, 7.9 MB/s 00:20:16.502 14:22:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.502 14:22:08 -- common/autotest_common.sh@884 -- # size=4096 00:20:16.502 14:22:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:16.502 14:22:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:16.502 14:22:08 -- common/autotest_common.sh@887 -- # return 0 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.502 14:22:08 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:16.502 14:22:08 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@51 -- # local i 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.502 14:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:16.760 14:22:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@41 -- # break 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.761 14:22:08 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:16.761 14:22:08 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:20:16.761 14:22:08 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@12 -- # local i 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:16.761 14:22:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:17.019 /dev/nbd1 00:20:17.019 14:22:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:17.019 14:22:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:17.019 14:22:08 -- common/autotest_common.sh@866 
-- # local nbd_name=nbd1 00:20:17.019 14:22:08 -- common/autotest_common.sh@867 -- # local i 00:20:17.019 14:22:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:17.019 14:22:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:17.019 14:22:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:17.019 14:22:08 -- common/autotest_common.sh@871 -- # break 00:20:17.019 14:22:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:17.019 14:22:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:17.019 14:22:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:17.019 1+0 records in 00:20:17.019 1+0 records out 00:20:17.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433915 s, 9.4 MB/s 00:20:17.019 14:22:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.019 14:22:08 -- common/autotest_common.sh@884 -- # size=4096 00:20:17.019 14:22:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:17.019 14:22:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:17.019 14:22:08 -- common/autotest_common.sh@887 -- # return 0 00:20:17.019 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:17.020 14:22:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:17.020 14:22:08 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:17.020 14:22:09 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:17.020 14:22:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:17.020 14:22:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:17.020 14:22:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:17.020 14:22:09 -- bdev/nbd_common.sh@51 -- # local i 00:20:17.020 14:22:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.020 14:22:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@41 -- # break 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.278 14:22:09 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@51 -- # local i 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.278 14:22:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 
1 )) 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@41 -- # break 00:20:17.537 14:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.537 14:22:09 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:17.537 14:22:09 -- bdev/bdev_raid.sh@709 -- # killprocess 135717 00:20:17.537 14:22:09 -- common/autotest_common.sh@936 -- # '[' -z 135717 ']' 00:20:17.537 14:22:09 -- common/autotest_common.sh@940 -- # kill -0 135717 00:20:17.537 14:22:09 -- common/autotest_common.sh@941 -- # uname 00:20:17.537 14:22:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.537 14:22:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135717 00:20:17.537 killing process with pid 135717 00:20:17.537 Received shutdown signal, test time was about 12.955404 seconds 00:20:17.537 00:20:17.537 Latency(us) 00:20:17.537 [2024-11-18T14:22:09.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.537 [2024-11-18T14:22:09.611Z] =================================================================================================================== 00:20:17.537 [2024-11-18T14:22:09.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.537 14:22:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:17.537 14:22:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:17.537 14:22:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135717' 00:20:17.537 14:22:09 -- common/autotest_common.sh@955 -- # kill 135717 00:20:17.537 14:22:09 -- common/autotest_common.sh@960 -- # wait 135717 00:20:17.537 [2024-11-18 14:22:09.562654] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:17.796 [2024-11-18 14:22:09.615009] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:18.054 14:22:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:18.054 00:20:18.054 real 0m17.470s 00:20:18.054 user 0m27.900s 00:20:18.054 sys 0m2.069s 00:20:18.054 ************************************ 00:20:18.054 END TEST raid_rebuild_test_io 00:20:18.054 ************************************ 00:20:18.054 14:22:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:18.054 14:22:09 -- common/autotest_common.sh@10 -- # set +x 00:20:18.054 14:22:09 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:20:18.054 14:22:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:18.054 14:22:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.054 14:22:09 -- common/autotest_common.sh@10 -- # set +x 00:20:18.054 ************************************ 00:20:18.054 START TEST raid_rebuild_test_sb_io 00:20:18.054 ************************************ 00:20:18.054 14:22:10 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.054 
14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:18.054 14:22:10 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@544 -- # raid_pid=136214 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136214 /var/tmp/spdk-raid.sock 00:20:18.055 14:22:10 -- common/autotest_common.sh@829 -- # '[' -z 136214 ']' 00:20:18.055 14:22:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:18.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:18.055 14:22:10 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:18.055 14:22:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.055 14:22:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:18.055 14:22:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.055 14:22:10 -- common/autotest_common.sh@10 -- # set +x 00:20:18.055 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:18.055 Zero copy mechanism will not be used. 00:20:18.055 [2024-11-18 14:22:10.072351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
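(For reference: the bdevperf invocation echoed in the trace above drives the background I/O for this test. The annotated sketch below restates it; the -w/-M/-o/-q/-t/-r/-L flags are standard bdevperf and SPDK app options, while the readings of -T, -U, and -z are assumptions from context in this log rather than confirmed against this SPDK revision.)

bdevperf_args=(
    -r /var/tmp/spdk-raid.sock   # RPC socket, the same one rpc.py targets throughout this log
    -T raid_bdev1                # assumed: restrict the run to the raid bdev under test
    -t 60                        # workload run time in seconds ("Running I/O for 60 seconds...")
    -w randrw -M 50              # random mixed workload, 50% reads
    -o 3M -q 2                   # 3 MiB I/Os (3145728 bytes, hence the zero-copy notice) at queue depth 2
    -U -z                        # -z: assumed to idle until the perform_tests RPC arrives; -U left unannotated
    -L bdev_raid                 # emit the bdev_raid.c *DEBUG* records seen throughout this log
)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${bdevperf_args[@]}"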
00:20:18.055 [2024-11-18 14:22:10.072576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136214 ] 00:20:18.313 [2024-11-18 14:22:10.210394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.313 [2024-11-18 14:22:10.278380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.313 [2024-11-18 14:22:10.348278] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:18.880 14:22:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.880 14:22:10 -- common/autotest_common.sh@862 -- # return 0 00:20:18.880 14:22:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:18.880 14:22:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:18.880 14:22:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:19.139 BaseBdev1_malloc 00:20:19.139 14:22:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:19.398 [2024-11-18 14:22:11.420253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:19.398 [2024-11-18 14:22:11.420356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.398 [2024-11-18 14:22:11.420403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:19.398 [2024-11-18 14:22:11.420447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.398 [2024-11-18 14:22:11.422863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.398 [2024-11-18 14:22:11.422921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:19.398 BaseBdev1 00:20:19.398 14:22:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:19.398 14:22:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:19.398 14:22:11 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:19.657 BaseBdev2_malloc 00:20:19.657 14:22:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:19.916 [2024-11-18 14:22:11.805663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:19.916 [2024-11-18 14:22:11.805722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.916 [2024-11-18 14:22:11.805756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:19.916 [2024-11-18 14:22:11.805802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.916 [2024-11-18 14:22:11.808007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.916 [2024-11-18 14:22:11.808052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:19.916 BaseBdev2 00:20:19.916 14:22:11 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:19.916 14:22:11 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:19.916 14:22:11 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:20.175 BaseBdev3_malloc 00:20:20.175 14:22:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:20.175 [2024-11-18 14:22:12.213675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:20.175 [2024-11-18 14:22:12.213729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.175 [2024-11-18 14:22:12.213763] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:20.175 [2024-11-18 14:22:12.213806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.175 [2024-11-18 14:22:12.216030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.175 [2024-11-18 14:22:12.216079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:20.175 BaseBdev3 00:20:20.175 14:22:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:20.175 14:22:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:20.175 14:22:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:20.433 BaseBdev4_malloc 00:20:20.433 14:22:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:20.692 [2024-11-18 14:22:12.647190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:20.692 [2024-11-18 14:22:12.647256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.692 [2024-11-18 14:22:12.647285] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:20.692 [2024-11-18 14:22:12.647331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.692 [2024-11-18 14:22:12.649516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.692 [2024-11-18 14:22:12.649581] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:20.692 BaseBdev4 00:20:20.692 14:22:12 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:20.951 spare_malloc 00:20:20.951 14:22:12 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:20.951 spare_delay 00:20:21.211 14:22:13 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:21.211 [2024-11-18 14:22:13.216689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.211 [2024-11-18 14:22:13.216756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.211 [2024-11-18 14:22:13.216786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:21.211 [2024-11-18 14:22:13.216826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.211 [2024-11-18 14:22:13.219078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:21.211 [2024-11-18 14:22:13.219127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.211 spare 00:20:21.211 14:22:13 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:21.470 [2024-11-18 14:22:13.404798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.470 [2024-11-18 14:22:13.406816] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.470 [2024-11-18 14:22:13.406890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.470 [2024-11-18 14:22:13.406942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:21.470 [2024-11-18 14:22:13.407168] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:21.470 [2024-11-18 14:22:13.407183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:21.470 [2024-11-18 14:22:13.407297] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:20:21.470 [2024-11-18 14:22:13.407685] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:21.470 [2024-11-18 14:22:13.407699] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:21.470 [2024-11-18 14:22:13.407837] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.470 14:22:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.729 14:22:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:21.729 "name": "raid_bdev1", 00:20:21.729 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:21.729 "strip_size_kb": 0, 00:20:21.729 "state": "online", 00:20:21.729 "raid_level": "raid1", 00:20:21.729 "superblock": true, 00:20:21.729 "num_base_bdevs": 4, 00:20:21.729 "num_base_bdevs_discovered": 4, 00:20:21.729 "num_base_bdevs_operational": 4, 00:20:21.729 "base_bdevs_list": [ 00:20:21.729 { 00:20:21.729 "name": "BaseBdev1", 00:20:21.729 "uuid": "509a20c3-a081-5ac2-86d6-b0c839fbcede", 00:20:21.729 "is_configured": true, 00:20:21.729 "data_offset": 2048, 00:20:21.729 "data_size": 63488 00:20:21.729 }, 00:20:21.729 { 00:20:21.729 "name": "BaseBdev2", 00:20:21.729 "uuid": "413c589f-c754-50ba-9300-113dfb2dc161", 00:20:21.729 "is_configured": true, 00:20:21.729 "data_offset": 2048, 
00:20:21.729 "data_size": 63488 00:20:21.729 }, 00:20:21.729 { 00:20:21.729 "name": "BaseBdev3", 00:20:21.729 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:21.729 "is_configured": true, 00:20:21.729 "data_offset": 2048, 00:20:21.729 "data_size": 63488 00:20:21.729 }, 00:20:21.729 { 00:20:21.729 "name": "BaseBdev4", 00:20:21.729 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:21.729 "is_configured": true, 00:20:21.729 "data_offset": 2048, 00:20:21.729 "data_size": 63488 00:20:21.729 } 00:20:21.729 ] 00:20:21.729 }' 00:20:21.729 14:22:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:21.729 14:22:13 -- common/autotest_common.sh@10 -- # set +x 00:20:22.296 14:22:14 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:22.296 14:22:14 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:22.555 [2024-11-18 14:22:14.461107] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.555 14:22:14 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:22.555 14:22:14 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.555 14:22:14 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:22.814 14:22:14 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:22.814 14:22:14 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:22.814 14:22:14 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:22.814 14:22:14 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:22.814 [2024-11-18 14:22:14.820133] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:20:22.814 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:22.814 Zero copy mechanism will not be used. 00:20:22.814 Running I/O for 60 seconds... 
00:20:23.072 [2024-11-18 14:22:14.974873] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:23.072 [2024-11-18 14:22:14.980836] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:20:23.072 14:22:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.072 14:22:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.330 14:22:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.331 "name": "raid_bdev1", 00:20:23.331 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:23.331 "strip_size_kb": 0, 00:20:23.331 "state": "online", 00:20:23.331 "raid_level": "raid1", 00:20:23.331 "superblock": true, 00:20:23.331 "num_base_bdevs": 4, 00:20:23.331 "num_base_bdevs_discovered": 3, 00:20:23.331 "num_base_bdevs_operational": 3, 00:20:23.331 "base_bdevs_list": [ 00:20:23.331 { 00:20:23.331 "name": null, 00:20:23.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.331 "is_configured": false, 00:20:23.331 "data_offset": 2048, 00:20:23.331 "data_size": 63488 00:20:23.331 }, 00:20:23.331 { 00:20:23.331 "name": "BaseBdev2", 00:20:23.331 "uuid": "413c589f-c754-50ba-9300-113dfb2dc161", 00:20:23.331 "is_configured": true, 00:20:23.331 "data_offset": 2048, 00:20:23.331 "data_size": 63488 00:20:23.331 }, 00:20:23.331 { 00:20:23.331 "name": "BaseBdev3", 00:20:23.331 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:23.331 "is_configured": true, 00:20:23.331 "data_offset": 2048, 00:20:23.331 "data_size": 63488 00:20:23.331 }, 00:20:23.331 { 00:20:23.331 "name": "BaseBdev4", 00:20:23.331 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:23.331 "is_configured": true, 00:20:23.331 "data_offset": 2048, 00:20:23.331 "data_size": 63488 00:20:23.331 } 00:20:23.331 ] 00:20:23.331 }' 00:20:23.331 14:22:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.331 14:22:15 -- common/autotest_common.sh@10 -- # set +x 00:20:23.896 14:22:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.154 [2024-11-18 14:22:16.047676] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:24.154 [2024-11-18 14:22:16.047775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.154 14:22:16 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:24.154 [2024-11-18 14:22:16.094620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:20:24.154 [2024-11-18 14:22:16.096797] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:24.154 
[2024-11-18 14:22:16.219701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:24.154 [2024-11-18 14:22:16.220190] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:24.412 [2024-11-18 14:22:16.359205] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:24.412 [2024-11-18 14:22:16.359526] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:24.671 [2024-11-18 14:22:16.687548] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:24.929 [2024-11-18 14:22:16.825727] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:24.929 [2024-11-18 14:22:16.825998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.188 14:22:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.447 [2024-11-18 14:22:17.264338] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:25.447 14:22:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:25.447 "name": "raid_bdev1", 00:20:25.447 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:25.447 "strip_size_kb": 0, 00:20:25.447 "state": "online", 00:20:25.447 "raid_level": "raid1", 00:20:25.447 "superblock": true, 00:20:25.447 "num_base_bdevs": 4, 00:20:25.447 "num_base_bdevs_discovered": 4, 00:20:25.447 "num_base_bdevs_operational": 4, 00:20:25.447 "process": { 00:20:25.447 "type": "rebuild", 00:20:25.447 "target": "spare", 00:20:25.447 "progress": { 00:20:25.447 "blocks": 16384, 00:20:25.447 "percent": 25 00:20:25.447 } 00:20:25.447 }, 00:20:25.447 "base_bdevs_list": [ 00:20:25.447 { 00:20:25.447 "name": "spare", 00:20:25.447 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:25.447 "is_configured": true, 00:20:25.447 "data_offset": 2048, 00:20:25.447 "data_size": 63488 00:20:25.447 }, 00:20:25.447 { 00:20:25.447 "name": "BaseBdev2", 00:20:25.447 "uuid": "413c589f-c754-50ba-9300-113dfb2dc161", 00:20:25.447 "is_configured": true, 00:20:25.447 "data_offset": 2048, 00:20:25.447 "data_size": 63488 00:20:25.447 }, 00:20:25.447 { 00:20:25.447 "name": "BaseBdev3", 00:20:25.447 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:25.447 "is_configured": true, 00:20:25.447 "data_offset": 2048, 00:20:25.447 "data_size": 63488 00:20:25.447 }, 00:20:25.447 { 00:20:25.447 "name": "BaseBdev4", 00:20:25.447 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:25.447 "is_configured": true, 00:20:25.447 "data_offset": 2048, 00:20:25.447 "data_size": 63488 00:20:25.447 } 00:20:25.447 ] 00:20:25.447 }' 00:20:25.447 14:22:17 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:25.447 14:22:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.447 14:22:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:25.447 14:22:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.447 14:22:17 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:25.447 [2024-11-18 14:22:17.497560] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:25.706 [2024-11-18 14:22:17.605737] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:25.706 [2024-11-18 14:22:17.605897] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:25.706 [2024-11-18 14:22:17.663902] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.964 [2024-11-18 14:22:17.833218] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.964 [2024-11-18 14:22:17.841232] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.964 [2024-11-18 14:22:17.855073] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.964 14:22:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.299 14:22:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.299 "name": "raid_bdev1", 00:20:26.299 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:26.299 "strip_size_kb": 0, 00:20:26.299 "state": "online", 00:20:26.299 "raid_level": "raid1", 00:20:26.299 "superblock": true, 00:20:26.299 "num_base_bdevs": 4, 00:20:26.299 "num_base_bdevs_discovered": 3, 00:20:26.299 "num_base_bdevs_operational": 3, 00:20:26.299 "base_bdevs_list": [ 00:20:26.299 { 00:20:26.299 "name": null, 00:20:26.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.299 "is_configured": false, 00:20:26.299 "data_offset": 2048, 00:20:26.299 "data_size": 63488 00:20:26.299 }, 00:20:26.299 { 00:20:26.299 "name": "BaseBdev2", 00:20:26.299 "uuid": "413c589f-c754-50ba-9300-113dfb2dc161", 00:20:26.299 "is_configured": true, 00:20:26.299 "data_offset": 2048, 00:20:26.299 "data_size": 63488 00:20:26.299 }, 00:20:26.299 { 00:20:26.299 "name": "BaseBdev3", 00:20:26.299 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:26.299 "is_configured": true, 
00:20:26.299 "data_offset": 2048, 00:20:26.299 "data_size": 63488 00:20:26.299 }, 00:20:26.299 { 00:20:26.299 "name": "BaseBdev4", 00:20:26.299 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:26.299 "is_configured": true, 00:20:26.299 "data_offset": 2048, 00:20:26.299 "data_size": 63488 00:20:26.299 } 00:20:26.299 ] 00:20:26.299 }' 00:20:26.299 14:22:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.299 14:22:18 -- common/autotest_common.sh@10 -- # set +x 00:20:26.866 14:22:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.866 14:22:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.866 14:22:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:26.866 14:22:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:26.866 14:22:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:26.867 14:22:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.867 14:22:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.867 14:22:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.867 "name": "raid_bdev1", 00:20:26.867 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:26.867 "strip_size_kb": 0, 00:20:26.867 "state": "online", 00:20:26.867 "raid_level": "raid1", 00:20:26.867 "superblock": true, 00:20:26.867 "num_base_bdevs": 4, 00:20:26.867 "num_base_bdevs_discovered": 3, 00:20:26.867 "num_base_bdevs_operational": 3, 00:20:26.867 "base_bdevs_list": [ 00:20:26.867 { 00:20:26.867 "name": null, 00:20:26.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.867 "is_configured": false, 00:20:26.867 "data_offset": 2048, 00:20:26.867 "data_size": 63488 00:20:26.867 }, 00:20:26.867 { 00:20:26.867 "name": "BaseBdev2", 00:20:26.867 "uuid": "413c589f-c754-50ba-9300-113dfb2dc161", 00:20:26.867 "is_configured": true, 00:20:26.867 "data_offset": 2048, 00:20:26.867 "data_size": 63488 00:20:26.867 }, 00:20:26.867 { 00:20:26.867 "name": "BaseBdev3", 00:20:26.867 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:26.867 "is_configured": true, 00:20:26.867 "data_offset": 2048, 00:20:26.867 "data_size": 63488 00:20:26.867 }, 00:20:26.867 { 00:20:26.867 "name": "BaseBdev4", 00:20:26.867 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:26.867 "is_configured": true, 00:20:26.867 "data_offset": 2048, 00:20:26.867 "data_size": 63488 00:20:26.867 } 00:20:26.867 ] 00:20:26.867 }' 00:20:26.867 14:22:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.867 14:22:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:26.867 14:22:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:27.125 14:22:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:27.125 14:22:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:27.383 [2024-11-18 14:22:19.209066] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:27.383 [2024-11-18 14:22:19.209136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.383 14:22:19 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:27.383 [2024-11-18 14:22:19.240524] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:20:27.383 [2024-11-18 14:22:19.242619] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:20:27.383 [2024-11-18 14:22:19.364504] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:27.642 [2024-11-18 14:22:19.476653] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:27.642 [2024-11-18 14:22:19.477287] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:27.900 [2024-11-18 14:22:19.810397] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:27.900 [2024-11-18 14:22:19.941376] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:28.467 14:22:20 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.468 [2024-11-18 14:22:20.313686] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:28.468 "name": "raid_bdev1", 00:20:28.468 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:28.468 "strip_size_kb": 0, 00:20:28.468 "state": "online", 00:20:28.468 "raid_level": "raid1", 00:20:28.468 "superblock": true, 00:20:28.468 "num_base_bdevs": 4, 00:20:28.468 "num_base_bdevs_discovered": 4, 00:20:28.468 "num_base_bdevs_operational": 4, 00:20:28.468 "process": { 00:20:28.468 "type": "rebuild", 00:20:28.468 "target": "spare", 00:20:28.468 "progress": { 00:20:28.468 "blocks": 18432, 00:20:28.468 "percent": 29 00:20:28.468 } 00:20:28.468 }, 00:20:28.468 "base_bdevs_list": [ 00:20:28.468 { 00:20:28.468 "name": "spare", 00:20:28.468 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:28.468 "is_configured": true, 00:20:28.468 "data_offset": 2048, 00:20:28.468 "data_size": 63488 00:20:28.468 }, 00:20:28.468 { 00:20:28.468 "name": "BaseBdev2", 00:20:28.468 "uuid": "413c589f-c754-50ba-9300-113dfb2dc161", 00:20:28.468 "is_configured": true, 00:20:28.468 "data_offset": 2048, 00:20:28.468 "data_size": 63488 00:20:28.468 }, 00:20:28.468 { 00:20:28.468 "name": "BaseBdev3", 00:20:28.468 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:28.468 "is_configured": true, 00:20:28.468 "data_offset": 2048, 00:20:28.468 "data_size": 63488 00:20:28.468 }, 00:20:28.468 { 00:20:28.468 "name": "BaseBdev4", 00:20:28.468 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:28.468 "is_configured": true, 00:20:28.468 "data_offset": 2048, 00:20:28.468 "data_size": 63488 00:20:28.468 } 00:20:28.468 ] 00:20:28.468 }' 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.468 14:22:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:28.726 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:28.726 14:22:20 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:28.726 [2024-11-18 14:22:20.662020] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:28.985 [2024-11-18 14:22:20.842834] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:28.985 [2024-11-18 14:22:20.890711] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:28.985 [2024-11-18 14:22:20.891161] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:28.985 [2024-11-18 14:22:20.891995] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000026d0 00:20:28.985 [2024-11-18 14:22:20.892023] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002940 00:20:28.985 [2024-11-18 14:22:20.906166] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.985 14:22:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.244 [2024-11-18 14:22:21.126499] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:29.244 [2024-11-18 14:22:21.126726] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:29.244 14:22:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.244 "name": "raid_bdev1", 00:20:29.244 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:29.244 "strip_size_kb": 0, 00:20:29.244 "state": "online", 00:20:29.244 "raid_level": "raid1", 00:20:29.244 "superblock": true, 00:20:29.244 "num_base_bdevs": 4, 00:20:29.244 "num_base_bdevs_discovered": 3, 00:20:29.244 "num_base_bdevs_operational": 3, 00:20:29.244 "process": { 00:20:29.244 "type": "rebuild", 00:20:29.244 "target": "spare", 00:20:29.244 "progress": { 00:20:29.244 "blocks": 28672, 00:20:29.244 "percent": 45 00:20:29.244 } 00:20:29.244 }, 00:20:29.244 "base_bdevs_list": [ 00:20:29.244 { 00:20:29.244 "name": "spare", 00:20:29.244 "uuid": 
"0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:29.244 "is_configured": true, 00:20:29.244 "data_offset": 2048, 00:20:29.244 "data_size": 63488 00:20:29.244 }, 00:20:29.244 { 00:20:29.244 "name": null, 00:20:29.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.244 "is_configured": false, 00:20:29.244 "data_offset": 2048, 00:20:29.244 "data_size": 63488 00:20:29.244 }, 00:20:29.244 { 00:20:29.244 "name": "BaseBdev3", 00:20:29.244 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:29.244 "is_configured": true, 00:20:29.244 "data_offset": 2048, 00:20:29.244 "data_size": 63488 00:20:29.244 }, 00:20:29.244 { 00:20:29.244 "name": "BaseBdev4", 00:20:29.244 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:29.244 "is_configured": true, 00:20:29.244 "data_offset": 2048, 00:20:29.244 "data_size": 63488 00:20:29.244 } 00:20:29.244 ] 00:20:29.244 }' 00:20:29.244 14:22:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@657 -- # local timeout=497 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.503 14:22:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.762 14:22:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.762 "name": "raid_bdev1", 00:20:29.762 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:29.762 "strip_size_kb": 0, 00:20:29.762 "state": "online", 00:20:29.762 "raid_level": "raid1", 00:20:29.762 "superblock": true, 00:20:29.762 "num_base_bdevs": 4, 00:20:29.762 "num_base_bdevs_discovered": 3, 00:20:29.762 "num_base_bdevs_operational": 3, 00:20:29.762 "process": { 00:20:29.762 "type": "rebuild", 00:20:29.762 "target": "spare", 00:20:29.762 "progress": { 00:20:29.762 "blocks": 34816, 00:20:29.762 "percent": 54 00:20:29.762 } 00:20:29.762 }, 00:20:29.762 "base_bdevs_list": [ 00:20:29.762 { 00:20:29.762 "name": "spare", 00:20:29.762 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:29.762 "is_configured": true, 00:20:29.762 "data_offset": 2048, 00:20:29.762 "data_size": 63488 00:20:29.762 }, 00:20:29.762 { 00:20:29.762 "name": null, 00:20:29.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.762 "is_configured": false, 00:20:29.762 "data_offset": 2048, 00:20:29.762 "data_size": 63488 00:20:29.762 }, 00:20:29.762 { 00:20:29.762 "name": "BaseBdev3", 00:20:29.763 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:29.763 "is_configured": true, 00:20:29.763 "data_offset": 2048, 00:20:29.763 "data_size": 63488 00:20:29.763 }, 00:20:29.763 { 00:20:29.763 "name": "BaseBdev4", 00:20:29.763 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:29.763 "is_configured": true, 00:20:29.763 "data_offset": 2048, 00:20:29.763 "data_size": 63488 
00:20:29.763 } 00:20:29.763 ] 00:20:29.763 }' 00:20:29.763 14:22:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.763 14:22:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.763 14:22:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.763 14:22:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.763 14:22:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.699 14:22:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.958 [2024-11-18 14:22:22.906460] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:30.958 14:22:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:30.958 "name": "raid_bdev1", 00:20:30.958 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:30.958 "strip_size_kb": 0, 00:20:30.958 "state": "online", 00:20:30.958 "raid_level": "raid1", 00:20:30.958 "superblock": true, 00:20:30.958 "num_base_bdevs": 4, 00:20:30.958 "num_base_bdevs_discovered": 3, 00:20:30.958 "num_base_bdevs_operational": 3, 00:20:30.958 "process": { 00:20:30.958 "type": "rebuild", 00:20:30.958 "target": "spare", 00:20:30.958 "progress": { 00:20:30.958 "blocks": 59392, 00:20:30.958 "percent": 93 00:20:30.958 } 00:20:30.958 }, 00:20:30.958 "base_bdevs_list": [ 00:20:30.958 { 00:20:30.958 "name": "spare", 00:20:30.958 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:30.958 "is_configured": true, 00:20:30.958 "data_offset": 2048, 00:20:30.958 "data_size": 63488 00:20:30.958 }, 00:20:30.958 { 00:20:30.958 "name": null, 00:20:30.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.958 "is_configured": false, 00:20:30.958 "data_offset": 2048, 00:20:30.958 "data_size": 63488 00:20:30.958 }, 00:20:30.958 { 00:20:30.958 "name": "BaseBdev3", 00:20:30.958 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:30.958 "is_configured": true, 00:20:30.958 "data_offset": 2048, 00:20:30.958 "data_size": 63488 00:20:30.958 }, 00:20:30.958 { 00:20:30.958 "name": "BaseBdev4", 00:20:30.958 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:30.958 "is_configured": true, 00:20:30.958 "data_offset": 2048, 00:20:30.958 "data_size": 63488 00:20:30.958 } 00:20:30.958 ] 00:20:30.958 }' 00:20:30.958 14:22:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:30.958 14:22:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:30.958 14:22:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:31.217 14:22:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.217 14:22:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:31.217 [2024-11-18 14:22:23.137616] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:31.217 [2024-11-18 14:22:23.243393] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:20:31.217 [2024-11-18 14:22:23.245046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.155 14:22:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.414 "name": "raid_bdev1", 00:20:32.414 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:32.414 "strip_size_kb": 0, 00:20:32.414 "state": "online", 00:20:32.414 "raid_level": "raid1", 00:20:32.414 "superblock": true, 00:20:32.414 "num_base_bdevs": 4, 00:20:32.414 "num_base_bdevs_discovered": 3, 00:20:32.414 "num_base_bdevs_operational": 3, 00:20:32.414 "base_bdevs_list": [ 00:20:32.414 { 00:20:32.414 "name": "spare", 00:20:32.414 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:32.414 "is_configured": true, 00:20:32.414 "data_offset": 2048, 00:20:32.414 "data_size": 63488 00:20:32.414 }, 00:20:32.414 { 00:20:32.414 "name": null, 00:20:32.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.414 "is_configured": false, 00:20:32.414 "data_offset": 2048, 00:20:32.414 "data_size": 63488 00:20:32.414 }, 00:20:32.414 { 00:20:32.414 "name": "BaseBdev3", 00:20:32.414 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:32.414 "is_configured": true, 00:20:32.414 "data_offset": 2048, 00:20:32.414 "data_size": 63488 00:20:32.414 }, 00:20:32.414 { 00:20:32.414 "name": "BaseBdev4", 00:20:32.414 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:32.414 "is_configured": true, 00:20:32.414 "data_offset": 2048, 00:20:32.414 "data_size": 63488 00:20:32.414 } 00:20:32.414 ] 00:20:32.414 }' 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@660 -- # break 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.414 14:22:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.673 "name": "raid_bdev1", 00:20:32.673 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:32.673 "strip_size_kb": 0, 00:20:32.673 "state": "online", 00:20:32.673 
"raid_level": "raid1", 00:20:32.673 "superblock": true, 00:20:32.673 "num_base_bdevs": 4, 00:20:32.673 "num_base_bdevs_discovered": 3, 00:20:32.673 "num_base_bdevs_operational": 3, 00:20:32.673 "base_bdevs_list": [ 00:20:32.673 { 00:20:32.673 "name": "spare", 00:20:32.673 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:32.673 "is_configured": true, 00:20:32.673 "data_offset": 2048, 00:20:32.673 "data_size": 63488 00:20:32.673 }, 00:20:32.673 { 00:20:32.673 "name": null, 00:20:32.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.673 "is_configured": false, 00:20:32.673 "data_offset": 2048, 00:20:32.673 "data_size": 63488 00:20:32.673 }, 00:20:32.673 { 00:20:32.673 "name": "BaseBdev3", 00:20:32.673 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:32.673 "is_configured": true, 00:20:32.673 "data_offset": 2048, 00:20:32.673 "data_size": 63488 00:20:32.673 }, 00:20:32.673 { 00:20:32.673 "name": "BaseBdev4", 00:20:32.673 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:32.673 "is_configured": true, 00:20:32.673 "data_offset": 2048, 00:20:32.673 "data_size": 63488 00:20:32.673 } 00:20:32.673 ] 00:20:32.673 }' 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.673 14:22:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.932 14:22:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.932 "name": "raid_bdev1", 00:20:32.932 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:32.932 "strip_size_kb": 0, 00:20:32.932 "state": "online", 00:20:32.932 "raid_level": "raid1", 00:20:32.932 "superblock": true, 00:20:32.932 "num_base_bdevs": 4, 00:20:32.932 "num_base_bdevs_discovered": 3, 00:20:32.932 "num_base_bdevs_operational": 3, 00:20:32.932 "base_bdevs_list": [ 00:20:32.932 { 00:20:32.932 "name": "spare", 00:20:32.932 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:32.932 "is_configured": true, 00:20:32.932 "data_offset": 2048, 00:20:32.932 "data_size": 63488 00:20:32.932 }, 00:20:32.932 { 00:20:32.932 "name": null, 00:20:32.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.932 "is_configured": false, 00:20:32.932 "data_offset": 2048, 00:20:32.932 "data_size": 63488 00:20:32.932 }, 00:20:32.932 { 00:20:32.932 "name": "BaseBdev3", 00:20:32.932 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:32.932 "is_configured": true, 
00:20:32.932 "data_offset": 2048, 00:20:32.932 "data_size": 63488 00:20:32.932 }, 00:20:32.932 { 00:20:32.932 "name": "BaseBdev4", 00:20:32.932 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:32.932 "is_configured": true, 00:20:32.932 "data_offset": 2048, 00:20:32.932 "data_size": 63488 00:20:32.932 } 00:20:32.932 ] 00:20:32.932 }' 00:20:32.932 14:22:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.932 14:22:24 -- common/autotest_common.sh@10 -- # set +x 00:20:33.893 14:22:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:33.893 [2024-11-18 14:22:25.848089] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:33.893 [2024-11-18 14:22:25.848148] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.893 00:20:33.893 Latency(us) 00:20:33.893 [2024-11-18T14:22:25.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.893 [2024-11-18T14:22:25.967Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:33.893 raid_bdev1 : 11.13 108.67 326.00 0.00 0.00 13268.84 286.72 114866.73 00:20:33.893 [2024-11-18T14:22:25.967Z] =================================================================================================================== 00:20:33.893 [2024-11-18T14:22:25.967Z] Total : 108.67 326.00 0.00 0.00 13268.84 286.72 114866.73 00:20:33.893 [2024-11-18 14:22:25.951464] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.893 [2024-11-18 14:22:25.951514] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.893 0 00:20:33.893 [2024-11-18 14:22:25.951627] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.893 [2024-11-18 14:22:25.951641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:34.172 14:22:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.172 14:22:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:34.172 14:22:26 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:34.172 14:22:26 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:34.172 14:22:26 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@12 -- # local i 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.172 14:22:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:34.445 /dev/nbd0 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:34.445 14:22:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:34.445 14:22:26 -- common/autotest_common.sh@867 -- # local i 00:20:34.445 14:22:26 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:34.445 14:22:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:34.445 14:22:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:34.445 14:22:26 -- common/autotest_common.sh@871 -- # break 00:20:34.445 14:22:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:34.445 14:22:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:34.445 14:22:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.445 1+0 records in 00:20:34.445 1+0 records out 00:20:34.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551421 s, 7.4 MB/s 00:20:34.445 14:22:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.445 14:22:26 -- common/autotest_common.sh@884 -- # size=4096 00:20:34.445 14:22:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.445 14:22:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:34.445 14:22:26 -- common/autotest_common.sh@887 -- # return 0 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.445 14:22:26 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:34.445 14:22:26 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:20:34.445 14:22:26 -- bdev/bdev_raid.sh@678 -- # continue 00:20:34.445 14:22:26 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:34.445 14:22:26 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:20:34.445 14:22:26 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@12 -- # local i 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.445 14:22:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:34.704 /dev/nbd1 00:20:34.704 14:22:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.704 14:22:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.704 14:22:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:34.704 14:22:26 -- common/autotest_common.sh@867 -- # local i 00:20:34.704 14:22:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:34.704 14:22:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:34.704 14:22:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:34.704 14:22:26 -- common/autotest_common.sh@871 -- # break 00:20:34.704 14:22:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:34.704 14:22:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:34.704 14:22:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.704 1+0 records in 00:20:34.704 1+0 records out 00:20:34.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429017 s, 9.5 MB/s 00:20:34.704 
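Note on the sequence above: this is the data-integrity half of the test. After the rebuild completes, the rebuilt 'spare' bdev and each surviving base bdev are exported as NBD block devices and byte-compared with cmp (shown just below), skipping the first 1048576 bytes (data_offset 2048 blocks x 512 B) that hold the raid superblock; because raid_bdev1 is raid1, every member past that offset should be an identical mirror. A minimal standalone sketch of the pattern, assuming the same RPC socket and bdev names as in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # export the rebuild target and one surviving member as NBD devices
    $rpc -s $sock nbd_start_disk spare /dev/nbd0
    $rpc -s $sock nbd_start_disk BaseBdev3 /dev/nbd1
    # compare payloads past the 1 MiB superblock area (-i skips it on both inputs)
    cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo 'mirror intact'
    # tear down in reverse order
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock nbd_stop_disk /dev/nbd0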
14:22:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.704 14:22:26 -- common/autotest_common.sh@884 -- # size=4096 00:20:34.704 14:22:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.704 14:22:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:34.704 14:22:26 -- common/autotest_common.sh@887 -- # return 0 00:20:34.704 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.704 14:22:26 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.704 14:22:26 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:34.963 14:22:26 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:34.963 14:22:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.963 14:22:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:34.963 14:22:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:34.963 14:22:26 -- bdev/nbd_common.sh@51 -- # local i 00:20:34.963 14:22:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:34.963 14:22:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:35.222 14:22:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:35.222 14:22:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:35.222 14:22:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@41 -- # break 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.223 14:22:27 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:35.223 14:22:27 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:20:35.223 14:22:27 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@12 -- # local i 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:35.223 14:22:27 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:20:35.482 /dev/nbd1 00:20:35.482 14:22:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:35.482 14:22:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:35.482 14:22:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:35.482 14:22:27 -- common/autotest_common.sh@867 -- # local i 00:20:35.482 14:22:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:35.482 14:22:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:35.482 14:22:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:35.482 14:22:27 -- common/autotest_common.sh@871 -- # break 00:20:35.482 14:22:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:35.482 14:22:27 -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:35.482 14:22:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.482 1+0 records in 00:20:35.482 1+0 records out 00:20:35.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562294 s, 7.3 MB/s 00:20:35.482 14:22:27 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.482 14:22:27 -- common/autotest_common.sh@884 -- # size=4096 00:20:35.482 14:22:27 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.482 14:22:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:35.482 14:22:27 -- common/autotest_common.sh@887 -- # return 0 00:20:35.482 14:22:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.482 14:22:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:35.482 14:22:27 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:35.482 14:22:27 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:35.483 14:22:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.483 14:22:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:35.483 14:22:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.483 14:22:27 -- bdev/nbd_common.sh@51 -- # local i 00:20:35.483 14:22:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.483 14:22:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@41 -- # break 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.741 14:22:27 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@51 -- # local i 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.741 14:22:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@41 -- # break 00:20:36.000 14:22:27 -- bdev/nbd_common.sh@45 -- # return 0 00:20:36.000 14:22:27 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:36.000 14:22:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:36.000 14:22:27 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev1 ']' 00:20:36.000 14:22:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:36.259 14:22:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:36.518 [2024-11-18 14:22:28.346178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:36.518 [2024-11-18 14:22:28.346277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.518 [2024-11-18 14:22:28.346318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:36.518 [2024-11-18 14:22:28.346341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.518 [2024-11-18 14:22:28.348689] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.518 [2024-11-18 14:22:28.348751] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:36.518 [2024-11-18 14:22:28.348837] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:36.518 [2024-11-18 14:22:28.348887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.518 BaseBdev1 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@696 -- # continue 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:20:36.518 14:22:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:36.777 [2024-11-18 14:22:28.782273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:36.777 [2024-11-18 14:22:28.782326] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.777 [2024-11-18 14:22:28.782360] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:36.777 [2024-11-18 14:22:28.782383] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.777 [2024-11-18 14:22:28.782728] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.777 [2024-11-18 14:22:28.782788] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:36.777 [2024-11-18 14:22:28.782850] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:20:36.777 [2024-11-18 14:22:28.782863] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:20:36.777 [2024-11-18 14:22:28.782870] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.777 [2024-11-18 14:22:28.782899] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:20:36.777 [2024-11-18 14:22:28.782943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:36.777 BaseBdev3 
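Note on the re-assembly above: each base bdev sits behind a passthru vbdev, so deleting and re-creating the passthru makes the bdev layer examine the device again, re-read the on-disk raid superblock, and re-claim it for raid_bdev1 (the 'raid superblock found on bdev ...' and '... is claimed' lines). When sequence numbers disagree, the newer superblock wins: here BaseBdev3 carries seq_number 4 versus 1 on the half-configured raid bdev, so the stale configuring instance is deleted and rebuilt around it. A minimal sketch of one recreate step, assuming the malloc backing names used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # drop the passthru and re-register it; examine then re-reads the superblock
    $rpc -s $sock bdev_passthru_delete BaseBdev3
    $rpc -s $sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3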
00:20:36.777 14:22:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:36.777 14:22:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:20:36.777 14:22:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:20:37.036 14:22:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:37.298 [2024-11-18 14:22:29.210405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:37.298 [2024-11-18 14:22:29.210469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.298 [2024-11-18 14:22:29.210504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:37.298 [2024-11-18 14:22:29.210531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.298 [2024-11-18 14:22:29.210879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.298 [2024-11-18 14:22:29.210939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:37.298 [2024-11-18 14:22:29.211003] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:20:37.298 [2024-11-18 14:22:29.211032] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:37.298 BaseBdev4 00:20:37.299 14:22:29 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:37.558 [2024-11-18 14:22:29.590536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:37.558 [2024-11-18 14:22:29.590639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.558 [2024-11-18 14:22:29.590669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:37.558 [2024-11-18 14:22:29.590696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.558 [2024-11-18 14:22:29.591061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.558 [2024-11-18 14:22:29.591122] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:37.558 [2024-11-18 14:22:29.591225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:37.558 [2024-11-18 14:22:29.591259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:37.558 spare 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.558 14:22:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.817 [2024-11-18 14:22:29.691372] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:20:37.817 [2024-11-18 14:22:29.691393] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:37.817 [2024-11-18 14:22:29.691537] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033bc0 00:20:37.817 [2024-11-18 14:22:29.691977] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:20:37.817 [2024-11-18 14:22:29.691998] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:20:37.817 [2024-11-18 14:22:29.692113] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.817 14:22:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.817 "name": "raid_bdev1", 00:20:37.817 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:37.817 "strip_size_kb": 0, 00:20:37.817 "state": "online", 00:20:37.817 "raid_level": "raid1", 00:20:37.817 "superblock": true, 00:20:37.817 "num_base_bdevs": 4, 00:20:37.817 "num_base_bdevs_discovered": 3, 00:20:37.817 "num_base_bdevs_operational": 3, 00:20:37.817 "base_bdevs_list": [ 00:20:37.817 { 00:20:37.817 "name": "spare", 00:20:37.817 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:37.817 "is_configured": true, 00:20:37.817 "data_offset": 2048, 00:20:37.817 "data_size": 63488 00:20:37.817 }, 00:20:37.817 { 00:20:37.817 "name": null, 00:20:37.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.817 "is_configured": false, 00:20:37.817 "data_offset": 2048, 00:20:37.817 "data_size": 63488 00:20:37.817 }, 00:20:37.817 { 00:20:37.817 "name": "BaseBdev3", 00:20:37.817 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:37.817 "is_configured": true, 00:20:37.817 "data_offset": 2048, 00:20:37.817 "data_size": 63488 00:20:37.817 }, 00:20:37.817 { 00:20:37.817 "name": "BaseBdev4", 00:20:37.817 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:37.817 "is_configured": true, 00:20:37.817 "data_offset": 2048, 00:20:37.817 "data_size": 63488 00:20:37.817 } 00:20:37.817 ] 00:20:37.817 }' 00:20:37.817 14:22:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.817 14:22:29 -- common/autotest_common.sh@10 -- # set +x 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.386 14:22:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.645 14:22:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:38.645 "name": "raid_bdev1", 00:20:38.645 "uuid": "9e4f0df2-ffd1-4cfb-b835-6c120fbce39a", 00:20:38.645 "strip_size_kb": 0, 00:20:38.645 "state": "online", 00:20:38.645 "raid_level": "raid1", 00:20:38.645 
"superblock": true, 00:20:38.645 "num_base_bdevs": 4, 00:20:38.645 "num_base_bdevs_discovered": 3, 00:20:38.645 "num_base_bdevs_operational": 3, 00:20:38.645 "base_bdevs_list": [ 00:20:38.645 { 00:20:38.645 "name": "spare", 00:20:38.645 "uuid": "0dc8ead8-66f8-5aaf-9981-4d669f36b216", 00:20:38.645 "is_configured": true, 00:20:38.645 "data_offset": 2048, 00:20:38.645 "data_size": 63488 00:20:38.645 }, 00:20:38.645 { 00:20:38.645 "name": null, 00:20:38.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.645 "is_configured": false, 00:20:38.645 "data_offset": 2048, 00:20:38.645 "data_size": 63488 00:20:38.645 }, 00:20:38.645 { 00:20:38.645 "name": "BaseBdev3", 00:20:38.645 "uuid": "540776a4-bc8d-55f2-a9f3-c3732b8e8867", 00:20:38.645 "is_configured": true, 00:20:38.645 "data_offset": 2048, 00:20:38.645 "data_size": 63488 00:20:38.645 }, 00:20:38.645 { 00:20:38.645 "name": "BaseBdev4", 00:20:38.645 "uuid": "d967030d-5a46-5593-aaf3-1e1eece6e44a", 00:20:38.645 "is_configured": true, 00:20:38.645 "data_offset": 2048, 00:20:38.645 "data_size": 63488 00:20:38.646 } 00:20:38.646 ] 00:20:38.646 }' 00:20:38.646 14:22:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:38.646 14:22:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:38.646 14:22:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:38.646 14:22:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:38.646 14:22:30 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.646 14:22:30 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:38.905 14:22:30 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.905 14:22:30 -- bdev/bdev_raid.sh@709 -- # killprocess 136214 00:20:38.905 14:22:30 -- common/autotest_common.sh@936 -- # '[' -z 136214 ']' 00:20:38.905 14:22:30 -- common/autotest_common.sh@940 -- # kill -0 136214 00:20:38.905 14:22:30 -- common/autotest_common.sh@941 -- # uname 00:20:38.905 14:22:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.905 14:22:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136214 00:20:38.905 14:22:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:38.905 14:22:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:38.905 14:22:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136214' 00:20:38.905 killing process with pid 136214 00:20:38.905 14:22:30 -- common/autotest_common.sh@955 -- # kill 136214 00:20:38.905 Received shutdown signal, test time was about 16.127668 seconds 00:20:38.905 00:20:38.905 Latency(us) 00:20:38.905 [2024-11-18T14:22:30.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.905 [2024-11-18T14:22:30.979Z] =================================================================================================================== 00:20:38.905 [2024-11-18T14:22:30.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.905 [2024-11-18 14:22:30.949945] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:38.905 [2024-11-18 14:22:30.950021] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:38.905 [2024-11-18 14:22:30.950091] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:38.905 [2024-11-18 14:22:30.950105] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000b480 name raid_bdev1, state offline 00:20:38.905 14:22:30 -- common/autotest_common.sh@960 -- # wait 136214 00:20:39.164 [2024-11-18 14:22:31.002425] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:39.423 00:20:39.423 real 0m21.295s 00:20:39.423 user 0m35.007s 00:20:39.423 sys 0m2.518s 00:20:39.423 14:22:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:39.423 14:22:31 -- common/autotest_common.sh@10 -- # set +x 00:20:39.423 ************************************ 00:20:39.423 END TEST raid_rebuild_test_sb_io 00:20:39.423 ************************************ 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:20:39.423 14:22:31 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:39.423 14:22:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:39.423 14:22:31 -- common/autotest_common.sh@10 -- # set +x 00:20:39.423 ************************************ 00:20:39.423 START TEST raid5f_state_function_test 00:20:39.423 ************************************ 00:20:39.423 14:22:31 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=136812 00:20:39.423 Process raid pid: 136812 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 136812' 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@228 -- # 
waitforlisten 136812 /var/tmp/spdk-raid.sock 00:20:39.423 14:22:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:39.423 14:22:31 -- common/autotest_common.sh@829 -- # '[' -z 136812 ']' 00:20:39.423 14:22:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:39.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:39.423 14:22:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.423 14:22:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:39.423 14:22:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.423 14:22:31 -- common/autotest_common.sh@10 -- # set +x 00:20:39.423 [2024-11-18 14:22:31.429131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:39.424 [2024-11-18 14:22:31.429315] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.683 [2024-11-18 14:22:31.568269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.683 [2024-11-18 14:22:31.640775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.683 [2024-11-18 14:22:31.710483] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.619 14:22:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.619 14:22:32 -- common/autotest_common.sh@862 -- # return 0 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:40.619 [2024-11-18 14:22:32.640661] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:40.619 [2024-11-18 14:22:32.640764] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:40.619 [2024-11-18 14:22:32.640777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:40.619 [2024-11-18 14:22:32.640797] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:40.619 [2024-11-18 14:22:32.640804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:40.619 [2024-11-18 14:22:32.640843] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.619 14:22:32 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.619 14:22:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.878 14:22:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.878 "name": "Existed_Raid", 00:20:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.878 "strip_size_kb": 64, 00:20:40.878 "state": "configuring", 00:20:40.878 "raid_level": "raid5f", 00:20:40.878 "superblock": false, 00:20:40.878 "num_base_bdevs": 3, 00:20:40.878 "num_base_bdevs_discovered": 0, 00:20:40.878 "num_base_bdevs_operational": 3, 00:20:40.878 "base_bdevs_list": [ 00:20:40.878 { 00:20:40.878 "name": "BaseBdev1", 00:20:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.878 "is_configured": false, 00:20:40.878 "data_offset": 0, 00:20:40.878 "data_size": 0 00:20:40.878 }, 00:20:40.878 { 00:20:40.878 "name": "BaseBdev2", 00:20:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.878 "is_configured": false, 00:20:40.878 "data_offset": 0, 00:20:40.878 "data_size": 0 00:20:40.878 }, 00:20:40.878 { 00:20:40.878 "name": "BaseBdev3", 00:20:40.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.878 "is_configured": false, 00:20:40.878 "data_offset": 0, 00:20:40.878 "data_size": 0 00:20:40.878 } 00:20:40.878 ] 00:20:40.878 }' 00:20:40.878 14:22:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.878 14:22:32 -- common/autotest_common.sh@10 -- # set +x 00:20:41.446 14:22:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:41.705 [2024-11-18 14:22:33.656676] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:41.705 [2024-11-18 14:22:33.656706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:20:41.705 14:22:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:41.964 [2024-11-18 14:22:33.896720] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.964 [2024-11-18 14:22:33.896770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.964 [2024-11-18 14:22:33.896780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.964 [2024-11-18 14:22:33.896808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.964 [2024-11-18 14:22:33.896815] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:41.964 [2024-11-18 14:22:33.896840] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:41.964 14:22:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:42.223 [2024-11-18 14:22:34.094713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:42.223 BaseBdev1 00:20:42.223 14:22:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:42.223 14:22:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:42.223 14:22:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:42.223 14:22:34 -- common/autotest_common.sh@899 -- # local 
i 00:20:42.223 14:22:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:42.223 14:22:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:42.223 14:22:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.223 14:22:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:42.482 [ 00:20:42.482 { 00:20:42.482 "name": "BaseBdev1", 00:20:42.482 "aliases": [ 00:20:42.482 "1d2bc3c1-b969-4ae5-8631-5fc6f3c69691" 00:20:42.482 ], 00:20:42.482 "product_name": "Malloc disk", 00:20:42.482 "block_size": 512, 00:20:42.482 "num_blocks": 65536, 00:20:42.482 "uuid": "1d2bc3c1-b969-4ae5-8631-5fc6f3c69691", 00:20:42.482 "assigned_rate_limits": { 00:20:42.482 "rw_ios_per_sec": 0, 00:20:42.482 "rw_mbytes_per_sec": 0, 00:20:42.482 "r_mbytes_per_sec": 0, 00:20:42.482 "w_mbytes_per_sec": 0 00:20:42.482 }, 00:20:42.482 "claimed": true, 00:20:42.482 "claim_type": "exclusive_write", 00:20:42.482 "zoned": false, 00:20:42.482 "supported_io_types": { 00:20:42.482 "read": true, 00:20:42.482 "write": true, 00:20:42.482 "unmap": true, 00:20:42.482 "write_zeroes": true, 00:20:42.482 "flush": true, 00:20:42.482 "reset": true, 00:20:42.482 "compare": false, 00:20:42.482 "compare_and_write": false, 00:20:42.482 "abort": true, 00:20:42.482 "nvme_admin": false, 00:20:42.482 "nvme_io": false 00:20:42.482 }, 00:20:42.482 "memory_domains": [ 00:20:42.482 { 00:20:42.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.482 "dma_device_type": 2 00:20:42.482 } 00:20:42.482 ], 00:20:42.482 "driver_specific": {} 00:20:42.482 } 00:20:42.482 ] 00:20:42.482 14:22:34 -- common/autotest_common.sh@905 -- # return 0 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.482 14:22:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.740 14:22:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.741 "name": "Existed_Raid", 00:20:42.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.741 "strip_size_kb": 64, 00:20:42.741 "state": "configuring", 00:20:42.741 "raid_level": "raid5f", 00:20:42.741 "superblock": false, 00:20:42.741 "num_base_bdevs": 3, 00:20:42.741 "num_base_bdevs_discovered": 1, 00:20:42.741 "num_base_bdevs_operational": 3, 00:20:42.741 "base_bdevs_list": [ 00:20:42.741 { 00:20:42.741 "name": "BaseBdev1", 00:20:42.741 "uuid": "1d2bc3c1-b969-4ae5-8631-5fc6f3c69691", 00:20:42.741 "is_configured": true, 00:20:42.741 "data_offset": 0, 00:20:42.741 "data_size": 65536 00:20:42.741 }, 
00:20:42.741 { 00:20:42.741 "name": "BaseBdev2", 00:20:42.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.741 "is_configured": false, 00:20:42.741 "data_offset": 0, 00:20:42.741 "data_size": 0 00:20:42.741 }, 00:20:42.741 { 00:20:42.741 "name": "BaseBdev3", 00:20:42.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.741 "is_configured": false, 00:20:42.741 "data_offset": 0, 00:20:42.741 "data_size": 0 00:20:42.741 } 00:20:42.741 ] 00:20:42.741 }' 00:20:42.741 14:22:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.741 14:22:34 -- common/autotest_common.sh@10 -- # set +x 00:20:43.308 14:22:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:43.567 [2024-11-18 14:22:35.494915] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.567 [2024-11-18 14:22:35.494954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:20:43.567 14:22:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:43.567 14:22:35 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:43.826 [2024-11-18 14:22:35.759020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.826 [2024-11-18 14:22:35.761001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.826 [2024-11-18 14:22:35.761054] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.826 [2024-11-18 14:22:35.761064] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:43.826 [2024-11-18 14:22:35.761089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.826 14:22:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.083 14:22:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.083 "name": "Existed_Raid", 00:20:44.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.083 "strip_size_kb": 64, 00:20:44.083 "state": "configuring", 00:20:44.083 "raid_level": "raid5f", 00:20:44.083 "superblock": false, 00:20:44.083 "num_base_bdevs": 3, 00:20:44.083 "num_base_bdevs_discovered": 1, 
00:20:44.083 "num_base_bdevs_operational": 3, 00:20:44.083 "base_bdevs_list": [ 00:20:44.083 { 00:20:44.083 "name": "BaseBdev1", 00:20:44.083 "uuid": "1d2bc3c1-b969-4ae5-8631-5fc6f3c69691", 00:20:44.083 "is_configured": true, 00:20:44.083 "data_offset": 0, 00:20:44.083 "data_size": 65536 00:20:44.083 }, 00:20:44.083 { 00:20:44.083 "name": "BaseBdev2", 00:20:44.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.083 "is_configured": false, 00:20:44.083 "data_offset": 0, 00:20:44.083 "data_size": 0 00:20:44.083 }, 00:20:44.083 { 00:20:44.083 "name": "BaseBdev3", 00:20:44.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.083 "is_configured": false, 00:20:44.083 "data_offset": 0, 00:20:44.083 "data_size": 0 00:20:44.083 } 00:20:44.083 ] 00:20:44.083 }' 00:20:44.083 14:22:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.083 14:22:35 -- common/autotest_common.sh@10 -- # set +x 00:20:44.651 14:22:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:44.651 [2024-11-18 14:22:36.719752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.651 BaseBdev2 00:20:44.910 14:22:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:44.910 14:22:36 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:44.910 14:22:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:44.910 14:22:36 -- common/autotest_common.sh@899 -- # local i 00:20:44.910 14:22:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:44.910 14:22:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:44.910 14:22:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.169 14:22:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:45.169 [ 00:20:45.169 { 00:20:45.169 "name": "BaseBdev2", 00:20:45.169 "aliases": [ 00:20:45.169 "1e124509-8a4e-460b-b2fe-c89d6bbff0df" 00:20:45.169 ], 00:20:45.169 "product_name": "Malloc disk", 00:20:45.169 "block_size": 512, 00:20:45.169 "num_blocks": 65536, 00:20:45.169 "uuid": "1e124509-8a4e-460b-b2fe-c89d6bbff0df", 00:20:45.169 "assigned_rate_limits": { 00:20:45.169 "rw_ios_per_sec": 0, 00:20:45.169 "rw_mbytes_per_sec": 0, 00:20:45.169 "r_mbytes_per_sec": 0, 00:20:45.169 "w_mbytes_per_sec": 0 00:20:45.169 }, 00:20:45.169 "claimed": true, 00:20:45.169 "claim_type": "exclusive_write", 00:20:45.169 "zoned": false, 00:20:45.169 "supported_io_types": { 00:20:45.169 "read": true, 00:20:45.169 "write": true, 00:20:45.169 "unmap": true, 00:20:45.169 "write_zeroes": true, 00:20:45.169 "flush": true, 00:20:45.169 "reset": true, 00:20:45.169 "compare": false, 00:20:45.169 "compare_and_write": false, 00:20:45.169 "abort": true, 00:20:45.169 "nvme_admin": false, 00:20:45.169 "nvme_io": false 00:20:45.169 }, 00:20:45.169 "memory_domains": [ 00:20:45.169 { 00:20:45.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.169 "dma_device_type": 2 00:20:45.169 } 00:20:45.169 ], 00:20:45.169 "driver_specific": {} 00:20:45.169 } 00:20:45.169 ] 00:20:45.169 14:22:37 -- common/autotest_common.sh@905 -- # return 0 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 3 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.169 14:22:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.428 14:22:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.428 "name": "Existed_Raid", 00:20:45.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.428 "strip_size_kb": 64, 00:20:45.428 "state": "configuring", 00:20:45.428 "raid_level": "raid5f", 00:20:45.428 "superblock": false, 00:20:45.428 "num_base_bdevs": 3, 00:20:45.428 "num_base_bdevs_discovered": 2, 00:20:45.428 "num_base_bdevs_operational": 3, 00:20:45.428 "base_bdevs_list": [ 00:20:45.428 { 00:20:45.428 "name": "BaseBdev1", 00:20:45.428 "uuid": "1d2bc3c1-b969-4ae5-8631-5fc6f3c69691", 00:20:45.428 "is_configured": true, 00:20:45.428 "data_offset": 0, 00:20:45.428 "data_size": 65536 00:20:45.428 }, 00:20:45.428 { 00:20:45.428 "name": "BaseBdev2", 00:20:45.428 "uuid": "1e124509-8a4e-460b-b2fe-c89d6bbff0df", 00:20:45.428 "is_configured": true, 00:20:45.428 "data_offset": 0, 00:20:45.428 "data_size": 65536 00:20:45.428 }, 00:20:45.428 { 00:20:45.428 "name": "BaseBdev3", 00:20:45.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.428 "is_configured": false, 00:20:45.428 "data_offset": 0, 00:20:45.428 "data_size": 0 00:20:45.428 } 00:20:45.429 ] 00:20:45.429 }' 00:20:45.429 14:22:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.429 14:22:37 -- common/autotest_common.sh@10 -- # set +x 00:20:45.995 14:22:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:46.254 [2024-11-18 14:22:38.303591] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.254 [2024-11-18 14:22:38.303665] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:20:46.254 [2024-11-18 14:22:38.303678] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:46.254 [2024-11-18 14:22:38.303809] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:20:46.254 [2024-11-18 14:22:38.304567] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:20:46.254 [2024-11-18 14:22:38.304590] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:20:46.254 [2024-11-18 14:22:38.304803] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.254 BaseBdev3 00:20:46.254 14:22:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:46.254 14:22:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:46.254 14:22:38 -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:46.254 14:22:38 -- common/autotest_common.sh@899 -- # local i 00:20:46.254 14:22:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:46.254 14:22:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:46.254 14:22:38 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.513 14:22:38 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.772 [ 00:20:46.772 { 00:20:46.772 "name": "BaseBdev3", 00:20:46.772 "aliases": [ 00:20:46.772 "3de8d8c1-ec2f-4ebc-8948-4d85760d6e03" 00:20:46.772 ], 00:20:46.772 "product_name": "Malloc disk", 00:20:46.772 "block_size": 512, 00:20:46.772 "num_blocks": 65536, 00:20:46.772 "uuid": "3de8d8c1-ec2f-4ebc-8948-4d85760d6e03", 00:20:46.772 "assigned_rate_limits": { 00:20:46.772 "rw_ios_per_sec": 0, 00:20:46.772 "rw_mbytes_per_sec": 0, 00:20:46.772 "r_mbytes_per_sec": 0, 00:20:46.772 "w_mbytes_per_sec": 0 00:20:46.772 }, 00:20:46.772 "claimed": true, 00:20:46.772 "claim_type": "exclusive_write", 00:20:46.772 "zoned": false, 00:20:46.772 "supported_io_types": { 00:20:46.772 "read": true, 00:20:46.772 "write": true, 00:20:46.772 "unmap": true, 00:20:46.772 "write_zeroes": true, 00:20:46.772 "flush": true, 00:20:46.772 "reset": true, 00:20:46.772 "compare": false, 00:20:46.772 "compare_and_write": false, 00:20:46.772 "abort": true, 00:20:46.772 "nvme_admin": false, 00:20:46.772 "nvme_io": false 00:20:46.772 }, 00:20:46.772 "memory_domains": [ 00:20:46.772 { 00:20:46.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.772 "dma_device_type": 2 00:20:46.772 } 00:20:46.772 ], 00:20:46.772 "driver_specific": {} 00:20:46.772 } 00:20:46.772 ] 00:20:46.772 14:22:38 -- common/autotest_common.sh@905 -- # return 0 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.772 14:22:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.030 14:22:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:47.030 "name": "Existed_Raid", 00:20:47.030 "uuid": "b7de5bf9-d222-4d83-b8d1-45b0ed0e0a0e", 00:20:47.030 "strip_size_kb": 64, 00:20:47.030 "state": "online", 00:20:47.030 "raid_level": "raid5f", 00:20:47.030 "superblock": false, 00:20:47.030 "num_base_bdevs": 3, 00:20:47.030 "num_base_bdevs_discovered": 3, 00:20:47.030 "num_base_bdevs_operational": 3, 00:20:47.030 
"base_bdevs_list": [ 00:20:47.030 { 00:20:47.030 "name": "BaseBdev1", 00:20:47.030 "uuid": "1d2bc3c1-b969-4ae5-8631-5fc6f3c69691", 00:20:47.030 "is_configured": true, 00:20:47.030 "data_offset": 0, 00:20:47.030 "data_size": 65536 00:20:47.030 }, 00:20:47.030 { 00:20:47.030 "name": "BaseBdev2", 00:20:47.030 "uuid": "1e124509-8a4e-460b-b2fe-c89d6bbff0df", 00:20:47.030 "is_configured": true, 00:20:47.030 "data_offset": 0, 00:20:47.030 "data_size": 65536 00:20:47.030 }, 00:20:47.030 { 00:20:47.030 "name": "BaseBdev3", 00:20:47.030 "uuid": "3de8d8c1-ec2f-4ebc-8948-4d85760d6e03", 00:20:47.030 "is_configured": true, 00:20:47.030 "data_offset": 0, 00:20:47.030 "data_size": 65536 00:20:47.030 } 00:20:47.030 ] 00:20:47.030 }' 00:20:47.030 14:22:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:47.030 14:22:38 -- common/autotest_common.sh@10 -- # set +x 00:20:47.602 14:22:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:47.861 [2024-11-18 14:22:39.735973] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.861 14:22:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.120 14:22:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.120 "name": "Existed_Raid", 00:20:48.120 "uuid": "b7de5bf9-d222-4d83-b8d1-45b0ed0e0a0e", 00:20:48.120 "strip_size_kb": 64, 00:20:48.120 "state": "online", 00:20:48.120 "raid_level": "raid5f", 00:20:48.120 "superblock": false, 00:20:48.120 "num_base_bdevs": 3, 00:20:48.120 "num_base_bdevs_discovered": 2, 00:20:48.120 "num_base_bdevs_operational": 2, 00:20:48.120 "base_bdevs_list": [ 00:20:48.120 { 00:20:48.120 "name": null, 00:20:48.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.120 "is_configured": false, 00:20:48.120 "data_offset": 0, 00:20:48.120 "data_size": 65536 00:20:48.120 }, 00:20:48.120 { 00:20:48.120 "name": "BaseBdev2", 00:20:48.120 "uuid": "1e124509-8a4e-460b-b2fe-c89d6bbff0df", 00:20:48.120 "is_configured": true, 00:20:48.120 "data_offset": 0, 00:20:48.120 "data_size": 65536 00:20:48.120 }, 00:20:48.120 { 00:20:48.120 "name": "BaseBdev3", 00:20:48.120 "uuid": "3de8d8c1-ec2f-4ebc-8948-4d85760d6e03", 00:20:48.120 
"is_configured": true, 00:20:48.120 "data_offset": 0, 00:20:48.120 "data_size": 65536 00:20:48.120 } 00:20:48.120 ] 00:20:48.120 }' 00:20:48.120 14:22:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.120 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:20:48.686 14:22:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:48.686 14:22:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:48.687 14:22:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:48.687 14:22:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.945 14:22:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:48.945 14:22:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:48.945 14:22:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:49.205 [2024-11-18 14:22:41.076650] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:49.205 [2024-11-18 14:22:41.076677] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.205 [2024-11-18 14:22:41.076739] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.205 14:22:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:49.205 14:22:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:49.205 14:22:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.205 14:22:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:49.464 14:22:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:49.464 14:22:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:49.464 14:22:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:49.464 [2024-11-18 14:22:41.537467] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:49.464 [2024-11-18 14:22:41.537529] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:49.722 14:22:41 -- bdev/bdev_raid.sh@287 -- # killprocess 136812 00:20:49.722 14:22:41 -- common/autotest_common.sh@936 -- # '[' -z 136812 ']' 00:20:49.722 14:22:41 -- common/autotest_common.sh@940 -- # kill -0 136812 00:20:49.722 14:22:41 -- common/autotest_common.sh@941 -- # uname 00:20:49.722 14:22:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:49.722 14:22:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136812 00:20:49.722 14:22:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:49.722 killing process with pid 136812 00:20:49.722 14:22:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:49.722 14:22:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136812' 00:20:49.722 14:22:41 -- 
common/autotest_common.sh@955 -- # kill 136812 00:20:49.722 [2024-11-18 14:22:41.758456] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:49.722 14:22:41 -- common/autotest_common.sh@960 -- # wait 136812 00:20:49.722 [2024-11-18 14:22:41.758530] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.981 14:22:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:49.981 00:20:49.981 real 0m10.669s 00:20:49.981 user 0m19.753s 00:20:49.981 sys 0m1.198s 00:20:49.981 14:22:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:49.981 ************************************ 00:20:49.981 END TEST raid5f_state_function_test 00:20:49.981 ************************************ 00:20:49.981 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:50.240 14:22:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:50.240 14:22:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:50.240 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.240 ************************************ 00:20:50.240 START TEST raid5f_state_function_test_sb 00:20:50.240 ************************************ 00:20:50.240 14:22:42 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=137170 00:20:50.240 Process raid pid: 137170 00:20:50.240 14:22:42 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 
00:20:50.241 14:22:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137170' 00:20:50.241 14:22:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137170 /var/tmp/spdk-raid.sock 00:20:50.241 14:22:42 -- common/autotest_common.sh@829 -- # '[' -z 137170 ']' 00:20:50.241 14:22:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:50.241 14:22:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.241 14:22:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:50.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:50.241 14:22:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.241 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.241 [2024-11-18 14:22:42.155731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:50.241 [2024-11-18 14:22:42.155958] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.241 [2024-11-18 14:22:42.296570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.499 [2024-11-18 14:22:42.361985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.499 [2024-11-18 14:22:42.431832] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.066 14:22:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.066 14:22:43 -- common/autotest_common.sh@862 -- # return 0 00:20:51.066 14:22:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:51.325 [2024-11-18 14:22:43.333938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:51.325 [2024-11-18 14:22:43.334038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:51.325 [2024-11-18 14:22:43.334052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:51.325 [2024-11-18 14:22:43.334071] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:51.325 [2024-11-18 14:22:43.334078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:51.325 [2024-11-18 14:22:43.334117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.325 14:22:43 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.325 14:22:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.584 14:22:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.584 "name": "Existed_Raid", 00:20:51.584 "uuid": "115840ac-c77d-449e-9c7e-fdf937a5b362", 00:20:51.584 "strip_size_kb": 64, 00:20:51.584 "state": "configuring", 00:20:51.584 "raid_level": "raid5f", 00:20:51.584 "superblock": true, 00:20:51.584 "num_base_bdevs": 3, 00:20:51.584 "num_base_bdevs_discovered": 0, 00:20:51.584 "num_base_bdevs_operational": 3, 00:20:51.584 "base_bdevs_list": [ 00:20:51.584 { 00:20:51.584 "name": "BaseBdev1", 00:20:51.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.584 "is_configured": false, 00:20:51.584 "data_offset": 0, 00:20:51.584 "data_size": 0 00:20:51.585 }, 00:20:51.585 { 00:20:51.585 "name": "BaseBdev2", 00:20:51.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.585 "is_configured": false, 00:20:51.585 "data_offset": 0, 00:20:51.585 "data_size": 0 00:20:51.585 }, 00:20:51.585 { 00:20:51.585 "name": "BaseBdev3", 00:20:51.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.585 "is_configured": false, 00:20:51.585 "data_offset": 0, 00:20:51.585 "data_size": 0 00:20:51.585 } 00:20:51.585 ] 00:20:51.585 }' 00:20:51.585 14:22:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.585 14:22:43 -- common/autotest_common.sh@10 -- # set +x 00:20:52.152 14:22:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:52.411 [2024-11-18 14:22:44.425946] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:52.411 [2024-11-18 14:22:44.425975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:20:52.411 14:22:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:52.669 [2024-11-18 14:22:44.670006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:52.669 [2024-11-18 14:22:44.670058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:52.669 [2024-11-18 14:22:44.670068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.669 [2024-11-18 14:22:44.670089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.670 [2024-11-18 14:22:44.670096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.670 [2024-11-18 14:22:44.670120] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.670 14:22:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:52.928 [2024-11-18 14:22:44.871970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.928 BaseBdev1 00:20:52.928 14:22:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:52.928 14:22:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:52.928 14:22:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:52.929 14:22:44 -- common/autotest_common.sh@899 -- # 
local i 00:20:52.929 14:22:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:52.929 14:22:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:52.929 14:22:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:53.188 14:22:45 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:53.188 [ 00:20:53.188 { 00:20:53.188 "name": "BaseBdev1", 00:20:53.188 "aliases": [ 00:20:53.188 "572ebf88-ee6c-4b33-b6ee-7d99fb5eea10" 00:20:53.188 ], 00:20:53.188 "product_name": "Malloc disk", 00:20:53.188 "block_size": 512, 00:20:53.188 "num_blocks": 65536, 00:20:53.188 "uuid": "572ebf88-ee6c-4b33-b6ee-7d99fb5eea10", 00:20:53.188 "assigned_rate_limits": { 00:20:53.188 "rw_ios_per_sec": 0, 00:20:53.188 "rw_mbytes_per_sec": 0, 00:20:53.188 "r_mbytes_per_sec": 0, 00:20:53.188 "w_mbytes_per_sec": 0 00:20:53.188 }, 00:20:53.188 "claimed": true, 00:20:53.188 "claim_type": "exclusive_write", 00:20:53.188 "zoned": false, 00:20:53.188 "supported_io_types": { 00:20:53.188 "read": true, 00:20:53.188 "write": true, 00:20:53.188 "unmap": true, 00:20:53.188 "write_zeroes": true, 00:20:53.188 "flush": true, 00:20:53.188 "reset": true, 00:20:53.188 "compare": false, 00:20:53.188 "compare_and_write": false, 00:20:53.188 "abort": true, 00:20:53.188 "nvme_admin": false, 00:20:53.188 "nvme_io": false 00:20:53.188 }, 00:20:53.188 "memory_domains": [ 00:20:53.188 { 00:20:53.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.188 "dma_device_type": 2 00:20:53.188 } 00:20:53.188 ], 00:20:53.188 "driver_specific": {} 00:20:53.188 } 00:20:53.188 ] 00:20:53.188 14:22:45 -- common/autotest_common.sh@905 -- # return 0 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:53.188 14:22:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:53.189 14:22:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.189 14:22:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.447 14:22:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:53.447 "name": "Existed_Raid", 00:20:53.447 "uuid": "25566909-7590-46aa-b56a-35acffd32fc2", 00:20:53.447 "strip_size_kb": 64, 00:20:53.447 "state": "configuring", 00:20:53.447 "raid_level": "raid5f", 00:20:53.447 "superblock": true, 00:20:53.447 "num_base_bdevs": 3, 00:20:53.447 "num_base_bdevs_discovered": 1, 00:20:53.447 "num_base_bdevs_operational": 3, 00:20:53.447 "base_bdevs_list": [ 00:20:53.447 { 00:20:53.447 "name": "BaseBdev1", 00:20:53.447 "uuid": "572ebf88-ee6c-4b33-b6ee-7d99fb5eea10", 00:20:53.447 "is_configured": true, 00:20:53.447 "data_offset": 2048, 00:20:53.447 "data_size": 63488 00:20:53.447 }, 
00:20:53.447 { 00:20:53.447 "name": "BaseBdev2", 00:20:53.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.447 "is_configured": false, 00:20:53.447 "data_offset": 0, 00:20:53.447 "data_size": 0 00:20:53.447 }, 00:20:53.447 { 00:20:53.447 "name": "BaseBdev3", 00:20:53.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.447 "is_configured": false, 00:20:53.447 "data_offset": 0, 00:20:53.447 "data_size": 0 00:20:53.447 } 00:20:53.447 ] 00:20:53.447 }' 00:20:53.447 14:22:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:53.447 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:20:54.381 14:22:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:54.381 [2024-11-18 14:22:46.264184] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:54.381 [2024-11-18 14:22:46.264231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:20:54.381 14:22:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:54.381 14:22:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:54.640 14:22:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:54.899 BaseBdev1 00:20:54.899 14:22:46 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:54.899 14:22:46 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:54.899 14:22:46 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:54.899 14:22:46 -- common/autotest_common.sh@899 -- # local i 00:20:54.899 14:22:46 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:54.899 14:22:46 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:54.899 14:22:46 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.899 14:22:46 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:55.157 [ 00:20:55.157 { 00:20:55.158 "name": "BaseBdev1", 00:20:55.158 "aliases": [ 00:20:55.158 "82ab7cea-82e6-47cc-b86d-105f92ee538e" 00:20:55.158 ], 00:20:55.158 "product_name": "Malloc disk", 00:20:55.158 "block_size": 512, 00:20:55.158 "num_blocks": 65536, 00:20:55.158 "uuid": "82ab7cea-82e6-47cc-b86d-105f92ee538e", 00:20:55.158 "assigned_rate_limits": { 00:20:55.158 "rw_ios_per_sec": 0, 00:20:55.158 "rw_mbytes_per_sec": 0, 00:20:55.158 "r_mbytes_per_sec": 0, 00:20:55.158 "w_mbytes_per_sec": 0 00:20:55.158 }, 00:20:55.158 "claimed": false, 00:20:55.158 "zoned": false, 00:20:55.158 "supported_io_types": { 00:20:55.158 "read": true, 00:20:55.158 "write": true, 00:20:55.158 "unmap": true, 00:20:55.158 "write_zeroes": true, 00:20:55.158 "flush": true, 00:20:55.158 "reset": true, 00:20:55.158 "compare": false, 00:20:55.158 "compare_and_write": false, 00:20:55.158 "abort": true, 00:20:55.158 "nvme_admin": false, 00:20:55.158 "nvme_io": false 00:20:55.158 }, 00:20:55.158 "memory_domains": [ 00:20:55.158 { 00:20:55.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.158 "dma_device_type": 2 00:20:55.158 } 00:20:55.158 ], 00:20:55.158 "driver_specific": {} 00:20:55.158 } 00:20:55.158 ] 00:20:55.158 14:22:47 -- common/autotest_common.sh@905 -- # return 0 00:20:55.158 14:22:47 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:55.417 [2024-11-18 14:22:47.317997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.417 [2024-11-18 14:22:47.319983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.417 [2024-11-18 14:22:47.320038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.417 [2024-11-18 14:22:47.320049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:55.417 [2024-11-18 14:22:47.320073] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.417 14:22:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.675 14:22:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.675 "name": "Existed_Raid", 00:20:55.675 "uuid": "4438c4e9-89f5-4d7d-8898-b47fe065714b", 00:20:55.675 "strip_size_kb": 64, 00:20:55.676 "state": "configuring", 00:20:55.676 "raid_level": "raid5f", 00:20:55.676 "superblock": true, 00:20:55.676 "num_base_bdevs": 3, 00:20:55.676 "num_base_bdevs_discovered": 1, 00:20:55.676 "num_base_bdevs_operational": 3, 00:20:55.676 "base_bdevs_list": [ 00:20:55.676 { 00:20:55.676 "name": "BaseBdev1", 00:20:55.676 "uuid": "82ab7cea-82e6-47cc-b86d-105f92ee538e", 00:20:55.676 "is_configured": true, 00:20:55.676 "data_offset": 2048, 00:20:55.676 "data_size": 63488 00:20:55.676 }, 00:20:55.676 { 00:20:55.676 "name": "BaseBdev2", 00:20:55.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.676 "is_configured": false, 00:20:55.676 "data_offset": 0, 00:20:55.676 "data_size": 0 00:20:55.676 }, 00:20:55.676 { 00:20:55.676 "name": "BaseBdev3", 00:20:55.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.676 "is_configured": false, 00:20:55.676 "data_offset": 0, 00:20:55.676 "data_size": 0 00:20:55.676 } 00:20:55.676 ] 00:20:55.676 }' 00:20:55.676 14:22:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.676 14:22:47 -- common/autotest_common.sh@10 -- # set +x 00:20:56.243 14:22:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:56.502 [2024-11-18 14:22:48.411717] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:56.502 BaseBdev2 00:20:56.502 14:22:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:56.502 14:22:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:56.502 14:22:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:56.502 14:22:48 -- common/autotest_common.sh@899 -- # local i 00:20:56.502 14:22:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:56.502 14:22:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:56.502 14:22:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:56.816 14:22:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:56.816 [ 00:20:56.816 { 00:20:56.816 "name": "BaseBdev2", 00:20:56.816 "aliases": [ 00:20:56.816 "2799e371-ccc5-410f-945a-5979046153ef" 00:20:56.816 ], 00:20:56.816 "product_name": "Malloc disk", 00:20:56.816 "block_size": 512, 00:20:56.816 "num_blocks": 65536, 00:20:56.816 "uuid": "2799e371-ccc5-410f-945a-5979046153ef", 00:20:56.816 "assigned_rate_limits": { 00:20:56.816 "rw_ios_per_sec": 0, 00:20:56.816 "rw_mbytes_per_sec": 0, 00:20:56.816 "r_mbytes_per_sec": 0, 00:20:56.816 "w_mbytes_per_sec": 0 00:20:56.816 }, 00:20:56.816 "claimed": true, 00:20:56.816 "claim_type": "exclusive_write", 00:20:56.816 "zoned": false, 00:20:56.816 "supported_io_types": { 00:20:56.816 "read": true, 00:20:56.816 "write": true, 00:20:56.816 "unmap": true, 00:20:56.816 "write_zeroes": true, 00:20:56.816 "flush": true, 00:20:56.816 "reset": true, 00:20:56.816 "compare": false, 00:20:56.816 "compare_and_write": false, 00:20:56.816 "abort": true, 00:20:56.816 "nvme_admin": false, 00:20:56.816 "nvme_io": false 00:20:56.816 }, 00:20:56.816 "memory_domains": [ 00:20:56.816 { 00:20:56.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.816 "dma_device_type": 2 00:20:56.816 } 00:20:56.816 ], 00:20:56.816 "driver_specific": {} 00:20:56.816 } 00:20:56.816 ] 00:20:56.816 14:22:48 -- common/autotest_common.sh@905 -- # return 0 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.816 14:22:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.075 14:22:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:57.075 "name": "Existed_Raid", 00:20:57.075 "uuid": 
"4438c4e9-89f5-4d7d-8898-b47fe065714b", 00:20:57.075 "strip_size_kb": 64, 00:20:57.075 "state": "configuring", 00:20:57.075 "raid_level": "raid5f", 00:20:57.075 "superblock": true, 00:20:57.075 "num_base_bdevs": 3, 00:20:57.075 "num_base_bdevs_discovered": 2, 00:20:57.075 "num_base_bdevs_operational": 3, 00:20:57.075 "base_bdevs_list": [ 00:20:57.075 { 00:20:57.075 "name": "BaseBdev1", 00:20:57.075 "uuid": "82ab7cea-82e6-47cc-b86d-105f92ee538e", 00:20:57.075 "is_configured": true, 00:20:57.075 "data_offset": 2048, 00:20:57.075 "data_size": 63488 00:20:57.075 }, 00:20:57.075 { 00:20:57.075 "name": "BaseBdev2", 00:20:57.075 "uuid": "2799e371-ccc5-410f-945a-5979046153ef", 00:20:57.075 "is_configured": true, 00:20:57.075 "data_offset": 2048, 00:20:57.075 "data_size": 63488 00:20:57.075 }, 00:20:57.075 { 00:20:57.075 "name": "BaseBdev3", 00:20:57.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.075 "is_configured": false, 00:20:57.075 "data_offset": 0, 00:20:57.075 "data_size": 0 00:20:57.075 } 00:20:57.075 ] 00:20:57.075 }' 00:20:57.075 14:22:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:57.075 14:22:49 -- common/autotest_common.sh@10 -- # set +x 00:20:57.641 14:22:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:57.899 [2024-11-18 14:22:49.939563] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:57.899 [2024-11-18 14:22:49.939974] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:20:57.899 [2024-11-18 14:22:49.940098] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:57.899 [2024-11-18 14:22:49.940276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:20:57.899 BaseBdev3 00:20:57.899 [2024-11-18 14:22:49.941049] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:20:57.899 [2024-11-18 14:22:49.941195] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:20:57.899 [2024-11-18 14:22:49.941467] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.899 14:22:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:57.899 14:22:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:57.899 14:22:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:57.899 14:22:49 -- common/autotest_common.sh@899 -- # local i 00:20:57.899 14:22:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:57.899 14:22:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:57.899 14:22:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:58.158 14:22:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:58.417 [ 00:20:58.417 { 00:20:58.417 "name": "BaseBdev3", 00:20:58.417 "aliases": [ 00:20:58.417 "adf6e4ed-9d52-4393-864b-5b14080f4724" 00:20:58.417 ], 00:20:58.417 "product_name": "Malloc disk", 00:20:58.417 "block_size": 512, 00:20:58.417 "num_blocks": 65536, 00:20:58.417 "uuid": "adf6e4ed-9d52-4393-864b-5b14080f4724", 00:20:58.417 "assigned_rate_limits": { 00:20:58.417 "rw_ios_per_sec": 0, 00:20:58.417 "rw_mbytes_per_sec": 0, 00:20:58.417 "r_mbytes_per_sec": 0, 00:20:58.417 
"w_mbytes_per_sec": 0 00:20:58.417 }, 00:20:58.417 "claimed": true, 00:20:58.417 "claim_type": "exclusive_write", 00:20:58.417 "zoned": false, 00:20:58.417 "supported_io_types": { 00:20:58.417 "read": true, 00:20:58.417 "write": true, 00:20:58.417 "unmap": true, 00:20:58.417 "write_zeroes": true, 00:20:58.417 "flush": true, 00:20:58.417 "reset": true, 00:20:58.417 "compare": false, 00:20:58.417 "compare_and_write": false, 00:20:58.417 "abort": true, 00:20:58.417 "nvme_admin": false, 00:20:58.417 "nvme_io": false 00:20:58.417 }, 00:20:58.417 "memory_domains": [ 00:20:58.417 { 00:20:58.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.417 "dma_device_type": 2 00:20:58.417 } 00:20:58.417 ], 00:20:58.417 "driver_specific": {} 00:20:58.417 } 00:20:58.417 ] 00:20:58.417 14:22:50 -- common/autotest_common.sh@905 -- # return 0 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.417 14:22:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.676 14:22:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.676 "name": "Existed_Raid", 00:20:58.676 "uuid": "4438c4e9-89f5-4d7d-8898-b47fe065714b", 00:20:58.676 "strip_size_kb": 64, 00:20:58.676 "state": "online", 00:20:58.676 "raid_level": "raid5f", 00:20:58.676 "superblock": true, 00:20:58.676 "num_base_bdevs": 3, 00:20:58.676 "num_base_bdevs_discovered": 3, 00:20:58.676 "num_base_bdevs_operational": 3, 00:20:58.676 "base_bdevs_list": [ 00:20:58.676 { 00:20:58.676 "name": "BaseBdev1", 00:20:58.676 "uuid": "82ab7cea-82e6-47cc-b86d-105f92ee538e", 00:20:58.676 "is_configured": true, 00:20:58.676 "data_offset": 2048, 00:20:58.676 "data_size": 63488 00:20:58.676 }, 00:20:58.676 { 00:20:58.676 "name": "BaseBdev2", 00:20:58.676 "uuid": "2799e371-ccc5-410f-945a-5979046153ef", 00:20:58.676 "is_configured": true, 00:20:58.676 "data_offset": 2048, 00:20:58.676 "data_size": 63488 00:20:58.676 }, 00:20:58.676 { 00:20:58.676 "name": "BaseBdev3", 00:20:58.676 "uuid": "adf6e4ed-9d52-4393-864b-5b14080f4724", 00:20:58.676 "is_configured": true, 00:20:58.676 "data_offset": 2048, 00:20:58.677 "data_size": 63488 00:20:58.677 } 00:20:58.677 ] 00:20:58.677 }' 00:20:58.677 14:22:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.677 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:20:59.245 14:22:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:59.245 [2024-11-18 14:22:51.315916] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.504 14:22:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.763 14:22:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.763 "name": "Existed_Raid", 00:20:59.763 "uuid": "4438c4e9-89f5-4d7d-8898-b47fe065714b", 00:20:59.763 "strip_size_kb": 64, 00:20:59.763 "state": "online", 00:20:59.763 "raid_level": "raid5f", 00:20:59.763 "superblock": true, 00:20:59.763 "num_base_bdevs": 3, 00:20:59.763 "num_base_bdevs_discovered": 2, 00:20:59.763 "num_base_bdevs_operational": 2, 00:20:59.763 "base_bdevs_list": [ 00:20:59.763 { 00:20:59.763 "name": null, 00:20:59.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.763 "is_configured": false, 00:20:59.763 "data_offset": 2048, 00:20:59.763 "data_size": 63488 00:20:59.763 }, 00:20:59.763 { 00:20:59.763 "name": "BaseBdev2", 00:20:59.763 "uuid": "2799e371-ccc5-410f-945a-5979046153ef", 00:20:59.763 "is_configured": true, 00:20:59.763 "data_offset": 2048, 00:20:59.763 "data_size": 63488 00:20:59.763 }, 00:20:59.763 { 00:20:59.763 "name": "BaseBdev3", 00:20:59.763 "uuid": "adf6e4ed-9d52-4393-864b-5b14080f4724", 00:20:59.764 "is_configured": true, 00:20:59.764 "data_offset": 2048, 00:20:59.764 "data_size": 63488 00:20:59.764 } 00:20:59.764 ] 00:20:59.764 }' 00:20:59.764 14:22:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.764 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.377 14:22:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:00.649 [2024-11-18 14:22:52.631702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:21:00.649 [2024-11-18 14:22:52.631866] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.649 [2024-11-18 14:22:52.632026] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.649 14:22:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:00.649 14:22:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:00.649 14:22:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.649 14:22:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:00.908 14:22:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:00.908 14:22:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:00.908 14:22:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:01.167 [2024-11-18 14:22:53.077009] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:01.167 [2024-11-18 14:22:53.077211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:21:01.167 14:22:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:01.167 14:22:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:01.167 14:22:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.167 14:22:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:01.425 14:22:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:01.425 14:22:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:01.425 14:22:53 -- bdev/bdev_raid.sh@287 -- # killprocess 137170 00:21:01.425 14:22:53 -- common/autotest_common.sh@936 -- # '[' -z 137170 ']' 00:21:01.425 14:22:53 -- common/autotest_common.sh@940 -- # kill -0 137170 00:21:01.425 14:22:53 -- common/autotest_common.sh@941 -- # uname 00:21:01.425 14:22:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.425 14:22:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137170 00:21:01.425 14:22:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:01.425 14:22:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:01.425 killing process with pid 137170 00:21:01.425 14:22:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137170' 00:21:01.425 14:22:53 -- common/autotest_common.sh@955 -- # kill 137170 00:21:01.425 [2024-11-18 14:22:53.328970] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:01.425 [2024-11-18 14:22:53.329049] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.425 14:22:53 -- common/autotest_common.sh@960 -- # wait 137170 00:21:01.684 14:22:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:01.684 00:21:01.684 real 0m11.513s 00:21:01.684 user 0m21.186s 00:21:01.684 sys 0m1.366s 00:21:01.684 14:22:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:01.685 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:21:01.685 ************************************ 00:21:01.685 END TEST raid5f_state_function_test_sb 00:21:01.685 ************************************ 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:21:01.685 14:22:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:01.685 14:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:01.685 14:22:53 
-- common/autotest_common.sh@10 -- # set +x 00:21:01.685 ************************************ 00:21:01.685 START TEST raid5f_superblock_test 00:21:01.685 ************************************ 00:21:01.685 14:22:53 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=137550 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 137550 /var/tmp/spdk-raid.sock 00:21:01.685 14:22:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:01.685 14:22:53 -- common/autotest_common.sh@829 -- # '[' -z 137550 ']' 00:21:01.685 14:22:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.685 14:22:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:01.685 14:22:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:01.685 14:22:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:01.685 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:21:01.685 [2024-11-18 14:22:53.740114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
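# Sketch of the harness startup just traced (bdev_raid.sh@356-358): the superblock test launches
# bdev_svc with RPC on a private socket and raid debug logging, then blocks until the socket answers.
# waitforlisten is the autotest_common.sh helper visible above; paths and flags are as in the trace:
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # polls the UNIX socket until RPCs are accepted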
00:21:01.685 [2024-11-18 14:22:53.740506] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137550 ] 00:21:01.944 [2024-11-18 14:22:53.877504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.944 [2024-11-18 14:22:53.945891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.944 [2024-11-18 14:22:54.015777] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.879 14:22:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.879 14:22:54 -- common/autotest_common.sh@862 -- # return 0 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:02.879 malloc1 00:21:02.879 14:22:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:03.137 [2024-11-18 14:22:55.132134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:03.137 [2024-11-18 14:22:55.132451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.137 [2024-11-18 14:22:55.132607] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:03.138 [2024-11-18 14:22:55.132759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.138 [2024-11-18 14:22:55.135216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.138 [2024-11-18 14:22:55.135394] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:03.138 pt1 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:03.138 14:22:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:03.396 malloc2 00:21:03.396 14:22:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:21:03.656 [2024-11-18 14:22:55.577618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:03.656 [2024-11-18 14:22:55.577814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.656 [2024-11-18 14:22:55.577951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:03.656 [2024-11-18 14:22:55.578088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.656 [2024-11-18 14:22:55.580414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.656 [2024-11-18 14:22:55.580584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:03.656 pt2 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:03.656 14:22:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:03.914 malloc3 00:21:03.914 14:22:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:04.173 [2024-11-18 14:22:55.994248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:04.173 [2024-11-18 14:22:55.994459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.173 [2024-11-18 14:22:55.994536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:04.173 [2024-11-18 14:22:55.994832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.173 [2024-11-18 14:22:55.997206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.173 [2024-11-18 14:22:55.997369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:04.173 pt3 00:21:04.173 14:22:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:21:04.173 14:22:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:21:04.173 14:22:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:04.173 [2024-11-18 14:22:56.178371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:04.173 [2024-11-18 14:22:56.180523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:04.173 [2024-11-18 14:22:56.180705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:04.173 [2024-11-18 14:22:56.180960] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:21:04.173 [2024-11-18 14:22:56.181094] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:04.173 [2024-11-18 14:22:56.181290] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:04.174 [2024-11-18 14:22:56.182110] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:21:04.174 [2024-11-18 14:22:56.182226] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:21:04.174 [2024-11-18 14:22:56.182499] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.174 14:22:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.433 14:22:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.433 "name": "raid_bdev1", 00:21:04.433 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:04.433 "strip_size_kb": 64, 00:21:04.433 "state": "online", 00:21:04.433 "raid_level": "raid5f", 00:21:04.433 "superblock": true, 00:21:04.433 "num_base_bdevs": 3, 00:21:04.433 "num_base_bdevs_discovered": 3, 00:21:04.433 "num_base_bdevs_operational": 3, 00:21:04.433 "base_bdevs_list": [ 00:21:04.433 { 00:21:04.433 "name": "pt1", 00:21:04.433 "uuid": "cf6b08ef-120d-5e54-ae6e-f455c4f69346", 00:21:04.433 "is_configured": true, 00:21:04.433 "data_offset": 2048, 00:21:04.433 "data_size": 63488 00:21:04.433 }, 00:21:04.433 { 00:21:04.433 "name": "pt2", 00:21:04.433 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:04.433 "is_configured": true, 00:21:04.433 "data_offset": 2048, 00:21:04.433 "data_size": 63488 00:21:04.433 }, 00:21:04.433 { 00:21:04.433 "name": "pt3", 00:21:04.433 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:04.433 "is_configured": true, 00:21:04.433 "data_offset": 2048, 00:21:04.433 "data_size": 63488 00:21:04.433 } 00:21:04.433 ] 00:21:04.433 }' 00:21:04.433 14:22:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.433 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:21:04.999 14:22:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.999 14:22:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:21:05.256 [2024-11-18 14:22:57.234769] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.256 14:22:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7f61c275-ab68-4fe9-b05d-5f8a02658ec5 00:21:05.256 14:22:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 7f61c275-ab68-4fe9-b05d-5f8a02658ec5 ']' 00:21:05.256 14:22:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:05.513 [2024-11-18 14:22:57.482665] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.513 [2024-11-18 14:22:57.482810] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.513 [2024-11-18 14:22:57.482992] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.513 [2024-11-18 14:22:57.483208] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.513 [2024-11-18 14:22:57.483336] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:21:05.513 14:22:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.513 14:22:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:21:05.772 14:22:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:21:05.772 14:22:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:21:05.772 14:22:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.772 14:22:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:06.030 14:22:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.030 14:22:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:06.288 14:22:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:21:06.288 14:22:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:06.288 14:22:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:06.288 14:22:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:06.548 14:22:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:21:06.548 14:22:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.548 14:22:58 -- common/autotest_common.sh@650 -- # local es=0 00:21:06.548 14:22:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.548 14:22:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.548 14:22:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.548 14:22:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.548 14:22:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.548 14:22:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.548 14:22:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.548 14:22:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.548 14:22:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:06.548 14:22:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.807 [2024-11-18 14:22:58.726847] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:06.807 [2024-11-18 14:22:58.728977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:06.807 [2024-11-18 14:22:58.729146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:06.807 [2024-11-18 14:22:58.729230] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:21:06.807 [2024-11-18 14:22:58.729422] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:21:06.807 [2024-11-18 14:22:58.729600] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:21:06.807 [2024-11-18 14:22:58.729758] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.807 [2024-11-18 14:22:58.729857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:21:06.807 request: 00:21:06.807 { 00:21:06.807 "name": "raid_bdev1", 00:21:06.807 "raid_level": "raid5f", 00:21:06.807 "base_bdevs": [ 00:21:06.807 "malloc1", 00:21:06.807 "malloc2", 00:21:06.807 "malloc3" 00:21:06.807 ], 00:21:06.807 "superblock": false, 00:21:06.807 "strip_size_kb": 64, 00:21:06.807 "method": "bdev_raid_create", 00:21:06.807 "req_id": 1 00:21:06.807 } 00:21:06.807 Got JSON-RPC error response 00:21:06.807 response: 00:21:06.807 { 00:21:06.807 "code": -17, 00:21:06.807 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:06.807 } 00:21:06.807 14:22:58 -- common/autotest_common.sh@653 -- # es=1 00:21:06.807 14:22:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.807 14:22:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.807 14:22:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.807 14:22:58 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.807 14:22:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:21:07.067 14:22:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:21:07.067 14:22:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:21:07.067 14:22:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:07.067 [2024-11-18 14:22:59.098863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:07.067 [2024-11-18 14:22:59.099023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.067 [2024-11-18 14:22:59.099092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:07.067 [2024-11-18 14:22:59.099237] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.067 [2024-11-18 14:22:59.101500] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.067 [2024-11-18 14:22:59.101639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:07.067 [2024-11-18 14:22:59.101845] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:07.067 [2024-11-18 14:22:59.102010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:07.067 pt1 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.067 14:22:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.325 14:22:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.325 "name": "raid_bdev1", 00:21:07.325 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:07.325 "strip_size_kb": 64, 00:21:07.325 "state": "configuring", 00:21:07.325 "raid_level": "raid5f", 00:21:07.325 "superblock": true, 00:21:07.325 "num_base_bdevs": 3, 00:21:07.325 "num_base_bdevs_discovered": 1, 00:21:07.325 "num_base_bdevs_operational": 3, 00:21:07.325 "base_bdevs_list": [ 00:21:07.325 { 00:21:07.325 "name": "pt1", 00:21:07.325 "uuid": "cf6b08ef-120d-5e54-ae6e-f455c4f69346", 00:21:07.325 "is_configured": true, 00:21:07.325 "data_offset": 2048, 00:21:07.325 "data_size": 63488 00:21:07.325 }, 00:21:07.325 { 00:21:07.325 "name": null, 00:21:07.325 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:07.325 "is_configured": false, 00:21:07.325 "data_offset": 2048, 00:21:07.325 "data_size": 63488 00:21:07.325 }, 00:21:07.325 { 00:21:07.325 "name": null, 00:21:07.325 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:07.325 "is_configured": false, 00:21:07.325 "data_offset": 2048, 00:21:07.325 "data_size": 63488 00:21:07.325 } 00:21:07.325 ] 00:21:07.325 }' 00:21:07.325 14:22:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.325 14:22:59 -- common/autotest_common.sh@10 -- # set +x 00:21:07.892 14:22:59 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:21:07.892 14:22:59 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:08.150 [2024-11-18 14:23:00.083039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:08.150 [2024-11-18 14:23:00.083245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.150 [2024-11-18 14:23:00.083323] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:08.150 [2024-11-18 14:23:00.083620] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.150 [2024-11-18 14:23:00.084078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.150 [2024-11-18 14:23:00.084239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:08.150 [2024-11-18 14:23:00.084421] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:08.150 [2024-11-18 14:23:00.084535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.150 pt2 00:21:08.150 14:23:00 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:08.409 [2024-11-18 14:23:00.335095] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.409 14:23:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.668 14:23:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.668 "name": "raid_bdev1", 00:21:08.668 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:08.668 "strip_size_kb": 64, 00:21:08.668 "state": "configuring", 00:21:08.668 "raid_level": "raid5f", 00:21:08.668 "superblock": true, 00:21:08.668 "num_base_bdevs": 3, 00:21:08.668 "num_base_bdevs_discovered": 1, 00:21:08.668 "num_base_bdevs_operational": 3, 00:21:08.668 "base_bdevs_list": [ 00:21:08.668 { 00:21:08.668 "name": "pt1", 00:21:08.668 "uuid": "cf6b08ef-120d-5e54-ae6e-f455c4f69346", 00:21:08.668 "is_configured": true, 00:21:08.668 "data_offset": 2048, 00:21:08.668 "data_size": 63488 00:21:08.668 }, 00:21:08.668 { 00:21:08.668 "name": null, 00:21:08.668 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:08.668 "is_configured": false, 00:21:08.668 "data_offset": 2048, 00:21:08.668 "data_size": 63488 00:21:08.668 }, 00:21:08.668 { 00:21:08.668 "name": null, 00:21:08.668 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:08.668 "is_configured": false, 00:21:08.668 "data_offset": 2048, 00:21:08.668 "data_size": 63488 00:21:08.668 } 00:21:08.668 ] 00:21:08.668 }' 00:21:08.668 14:23:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.668 14:23:00 -- common/autotest_common.sh@10 -- # set +x 00:21:09.234 14:23:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:21:09.234 14:23:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:09.235 14:23:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.235 [2024-11-18 14:23:01.279257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.235 [2024-11-18 14:23:01.279450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.235 [2024-11-18 14:23:01.279516] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:09.235 [2024-11-18 14:23:01.279783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.235 [2024-11-18 14:23:01.280166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.235 [2024-11-18 14:23:01.280326] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.235 [2024-11-18 14:23:01.280512] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:09.235 [2024-11-18 14:23:01.280661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.235 pt2 00:21:09.235 14:23:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:09.235 14:23:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:09.235 14:23:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:09.493 [2024-11-18 14:23:01.467323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:09.493 [2024-11-18 14:23:01.467510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.494 [2024-11-18 14:23:01.467574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:09.494 [2024-11-18 14:23:01.467788] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.494 [2024-11-18 14:23:01.468168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.494 [2024-11-18 14:23:01.468304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:09.494 [2024-11-18 14:23:01.468489] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:09.494 [2024-11-18 14:23:01.468613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:09.494 [2024-11-18 14:23:01.468763] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:09.494 [2024-11-18 14:23:01.468864] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:09.494 [2024-11-18 14:23:01.469079] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:21:09.494 [2024-11-18 14:23:01.469727] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:09.494 [2024-11-18 14:23:01.469856] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:09.494 [2024-11-18 14:23:01.470064] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.494 pt3 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.494 14:23:01 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.752 14:23:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:09.752 "name": "raid_bdev1", 00:21:09.752 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:09.752 "strip_size_kb": 64, 00:21:09.752 "state": "online", 00:21:09.752 "raid_level": "raid5f", 00:21:09.752 "superblock": true, 00:21:09.752 "num_base_bdevs": 3, 00:21:09.752 "num_base_bdevs_discovered": 3, 00:21:09.752 "num_base_bdevs_operational": 3, 00:21:09.752 "base_bdevs_list": [ 00:21:09.752 { 00:21:09.752 "name": "pt1", 00:21:09.752 "uuid": "cf6b08ef-120d-5e54-ae6e-f455c4f69346", 00:21:09.752 "is_configured": true, 00:21:09.752 "data_offset": 2048, 00:21:09.752 "data_size": 63488 00:21:09.752 }, 00:21:09.752 { 00:21:09.752 "name": "pt2", 00:21:09.752 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:09.752 "is_configured": true, 00:21:09.752 "data_offset": 2048, 00:21:09.752 "data_size": 63488 00:21:09.753 }, 00:21:09.753 { 00:21:09.753 "name": "pt3", 00:21:09.753 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:09.753 "is_configured": true, 00:21:09.753 "data_offset": 2048, 00:21:09.753 "data_size": 63488 00:21:09.753 } 00:21:09.753 ] 00:21:09.753 }' 00:21:09.753 14:23:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:09.753 14:23:01 -- common/autotest_common.sh@10 -- # set +x 00:21:10.320 14:23:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.320 14:23:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:10.578 [2024-11-18 14:23:02.615624] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.578 14:23:02 -- bdev/bdev_raid.sh@430 -- # '[' 7f61c275-ab68-4fe9-b05d-5f8a02658ec5 '!=' 7f61c275-ab68-4fe9-b05d-5f8a02658ec5 ']' 00:21:10.578 14:23:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:21:10.578 14:23:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:10.578 14:23:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:10.578 14:23:02 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:10.837 [2024-11-18 14:23:02.811569] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.837 14:23:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.096 14:23:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.096 "name": "raid_bdev1", 00:21:11.096 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:11.096 "strip_size_kb": 64, 
00:21:11.096 "state": "online", 00:21:11.096 "raid_level": "raid5f", 00:21:11.096 "superblock": true, 00:21:11.096 "num_base_bdevs": 3, 00:21:11.096 "num_base_bdevs_discovered": 2, 00:21:11.096 "num_base_bdevs_operational": 2, 00:21:11.096 "base_bdevs_list": [ 00:21:11.096 { 00:21:11.096 "name": null, 00:21:11.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.096 "is_configured": false, 00:21:11.096 "data_offset": 2048, 00:21:11.096 "data_size": 63488 00:21:11.096 }, 00:21:11.096 { 00:21:11.096 "name": "pt2", 00:21:11.096 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:11.096 "is_configured": true, 00:21:11.096 "data_offset": 2048, 00:21:11.096 "data_size": 63488 00:21:11.096 }, 00:21:11.096 { 00:21:11.096 "name": "pt3", 00:21:11.096 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:11.096 "is_configured": true, 00:21:11.096 "data_offset": 2048, 00:21:11.096 "data_size": 63488 00:21:11.096 } 00:21:11.096 ] 00:21:11.096 }' 00:21:11.096 14:23:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.096 14:23:03 -- common/autotest_common.sh@10 -- # set +x 00:21:11.663 14:23:03 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:11.921 [2024-11-18 14:23:03.875737] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:11.921 [2024-11-18 14:23:03.875761] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.921 [2024-11-18 14:23:03.875806] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.921 [2024-11-18 14:23:03.875853] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.921 [2024-11-18 14:23:03.875863] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:21:11.921 14:23:03 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.921 14:23:03 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:21:12.180 14:23:04 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:21:12.180 14:23:04 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:21:12.180 14:23:04 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:21:12.180 14:23:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:12.180 14:23:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:12.439 14:23:04 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:12.698 [2024-11-18 14:23:04.655837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:12.698 [2024-11-18 14:23:04.655890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
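# Sketch of the re-add step in progress here: recreating pt2 over malloc2 with the same fixed test
# UUID lets the examine path find the on-disk raid5f superblock and re-claim the bdev. Both RPCs
# appear verbatim in the trace; the state check is the same bdev_raid_get_bdevs/jq probe as before:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "configuring" until enough base bdevs return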
00:21:12.698 [2024-11-18 14:23:04.655922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:12.698 [2024-11-18 14:23:04.655943] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.698 [2024-11-18 14:23:04.658122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.698 [2024-11-18 14:23:04.658183] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:12.698 [2024-11-18 14:23:04.658265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:12.698 [2024-11-18 14:23:04.658292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:12.698 pt2 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.698 14:23:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.957 14:23:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:12.957 "name": "raid_bdev1", 00:21:12.957 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:12.957 "strip_size_kb": 64, 00:21:12.957 "state": "configuring", 00:21:12.957 "raid_level": "raid5f", 00:21:12.957 "superblock": true, 00:21:12.957 "num_base_bdevs": 3, 00:21:12.957 "num_base_bdevs_discovered": 1, 00:21:12.957 "num_base_bdevs_operational": 2, 00:21:12.957 "base_bdevs_list": [ 00:21:12.957 { 00:21:12.957 "name": null, 00:21:12.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.957 "is_configured": false, 00:21:12.957 "data_offset": 2048, 00:21:12.957 "data_size": 63488 00:21:12.957 }, 00:21:12.957 { 00:21:12.957 "name": "pt2", 00:21:12.957 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:12.957 "is_configured": true, 00:21:12.957 "data_offset": 2048, 00:21:12.957 "data_size": 63488 00:21:12.957 }, 00:21:12.957 { 00:21:12.957 "name": null, 00:21:12.957 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:12.957 "is_configured": false, 00:21:12.957 "data_offset": 2048, 00:21:12.957 "data_size": 63488 00:21:12.957 } 00:21:12.957 ] 00:21:12.957 }' 00:21:12.957 14:23:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:12.957 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:21:13.523 14:23:05 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:21:13.523 14:23:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:13.523 14:23:05 -- bdev/bdev_raid.sh@462 -- # i=2 00:21:13.523 14:23:05 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:13.782 [2024-11-18 14:23:05.664003] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:13.782 [2024-11-18 14:23:05.664056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:13.782 [2024-11-18 14:23:05.664089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:13.782 [2024-11-18 14:23:05.664109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:13.782 [2024-11-18 14:23:05.664441] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:13.782 [2024-11-18 14:23:05.664474] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:13.782 [2024-11-18 14:23:05.664549] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:13.782 [2024-11-18 14:23:05.664570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:13.782 [2024-11-18 14:23:05.664650] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:21:13.782 [2024-11-18 14:23:05.664662] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:13.782 [2024-11-18 14:23:05.664721] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:21:13.782 [2024-11-18 14:23:05.665320] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:21:13.782 [2024-11-18 14:23:05.665335] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:21:13.782 [2024-11-18 14:23:05.665548] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.782 pt3 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.782 14:23:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.041 14:23:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:14.041 "name": "raid_bdev1", 00:21:14.041 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:14.041 "strip_size_kb": 64, 00:21:14.041 "state": "online", 00:21:14.041 "raid_level": "raid5f", 00:21:14.041 "superblock": true, 00:21:14.041 "num_base_bdevs": 3, 00:21:14.041 "num_base_bdevs_discovered": 2, 00:21:14.041 "num_base_bdevs_operational": 2, 00:21:14.041 "base_bdevs_list": [ 00:21:14.041 { 00:21:14.041 "name": null, 00:21:14.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.041 "is_configured": false, 00:21:14.041 "data_offset": 2048, 00:21:14.041 "data_size": 63488 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "name": "pt2", 00:21:14.041 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 
00:21:14.041 "is_configured": true, 00:21:14.041 "data_offset": 2048, 00:21:14.041 "data_size": 63488 00:21:14.041 }, 00:21:14.041 { 00:21:14.041 "name": "pt3", 00:21:14.041 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:14.041 "is_configured": true, 00:21:14.041 "data_offset": 2048, 00:21:14.041 "data_size": 63488 00:21:14.041 } 00:21:14.041 ] 00:21:14.041 }' 00:21:14.041 14:23:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:14.041 14:23:05 -- common/autotest_common.sh@10 -- # set +x 00:21:14.609 14:23:06 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:21:14.609 14:23:06 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:14.867 [2024-11-18 14:23:06.748182] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.867 [2024-11-18 14:23:06.748205] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.867 [2024-11-18 14:23:06.748244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.867 [2024-11-18 14:23:06.748289] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.867 [2024-11-18 14:23:06.748298] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:21:14.867 14:23:06 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.867 14:23:06 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:15.126 [2024-11-18 14:23:07.179352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:15.126 [2024-11-18 14:23:07.179408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.126 [2024-11-18 14:23:07.179443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:15.126 [2024-11-18 14:23:07.179463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:15.126 [2024-11-18 14:23:07.181372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:15.126 [2024-11-18 14:23:07.181418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:15.126 [2024-11-18 14:23:07.181496] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:15.126 [2024-11-18 14:23:07.181528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:15.126 pt1 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:15.126 14:23:07 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.126 14:23:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.386 14:23:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:15.386 "name": "raid_bdev1", 00:21:15.386 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:15.386 "strip_size_kb": 64, 00:21:15.386 "state": "configuring", 00:21:15.386 "raid_level": "raid5f", 00:21:15.386 "superblock": true, 00:21:15.386 "num_base_bdevs": 3, 00:21:15.386 "num_base_bdevs_discovered": 1, 00:21:15.386 "num_base_bdevs_operational": 3, 00:21:15.386 "base_bdevs_list": [ 00:21:15.386 { 00:21:15.386 "name": "pt1", 00:21:15.386 "uuid": "cf6b08ef-120d-5e54-ae6e-f455c4f69346", 00:21:15.386 "is_configured": true, 00:21:15.386 "data_offset": 2048, 00:21:15.386 "data_size": 63488 00:21:15.386 }, 00:21:15.386 { 00:21:15.386 "name": null, 00:21:15.386 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:15.386 "is_configured": false, 00:21:15.386 "data_offset": 2048, 00:21:15.386 "data_size": 63488 00:21:15.386 }, 00:21:15.386 { 00:21:15.386 "name": null, 00:21:15.386 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:15.386 "is_configured": false, 00:21:15.386 "data_offset": 2048, 00:21:15.386 "data_size": 63488 00:21:15.386 } 00:21:15.386 ] 00:21:15.386 }' 00:21:15.386 14:23:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:15.386 14:23:07 -- common/autotest_common.sh@10 -- # set +x 00:21:16.321 14:23:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:21:16.321 14:23:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:16.321 14:23:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:16.321 14:23:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:16.321 14:23:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:16.321 14:23:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:16.579 14:23:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:16.579 14:23:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:16.579 14:23:08 -- bdev/bdev_raid.sh@489 -- # i=2 00:21:16.579 14:23:08 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:16.837 [2024-11-18 14:23:08.763611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:16.837 [2024-11-18 14:23:08.763671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.837 [2024-11-18 14:23:08.763698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:16.837 [2024-11-18 14:23:08.763725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.837 [2024-11-18 14:23:08.764051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.837 [2024-11-18 14:23:08.764095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:16.837 [2024-11-18 14:23:08.764168] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:21:16.837 [2024-11-18 14:23:08.764181] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:16.837 [2024-11-18 14:23:08.764187] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.837 [2024-11-18 14:23:08.764216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:16.837 [2024-11-18 14:23:08.764264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:16.837 pt3 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:16.837 14:23:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:16.838 14:23:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.838 14:23:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.096 14:23:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.096 "name": "raid_bdev1", 00:21:17.096 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:17.096 "strip_size_kb": 64, 00:21:17.096 "state": "configuring", 00:21:17.096 "raid_level": "raid5f", 00:21:17.096 "superblock": true, 00:21:17.096 "num_base_bdevs": 3, 00:21:17.096 "num_base_bdevs_discovered": 1, 00:21:17.096 "num_base_bdevs_operational": 2, 00:21:17.096 "base_bdevs_list": [ 00:21:17.096 { 00:21:17.096 "name": null, 00:21:17.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.096 "is_configured": false, 00:21:17.096 "data_offset": 2048, 00:21:17.096 "data_size": 63488 00:21:17.096 }, 00:21:17.096 { 00:21:17.096 "name": null, 00:21:17.096 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:17.096 "is_configured": false, 00:21:17.096 "data_offset": 2048, 00:21:17.096 "data_size": 63488 00:21:17.096 }, 00:21:17.097 { 00:21:17.097 "name": "pt3", 00:21:17.097 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:17.097 "is_configured": true, 00:21:17.097 "data_offset": 2048, 00:21:17.097 "data_size": 63488 00:21:17.097 } 00:21:17.097 ] 00:21:17.097 }' 00:21:17.097 14:23:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.097 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:21:17.662 14:23:09 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:21:17.663 14:23:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:17.663 14:23:09 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:17.921 [2024-11-18 14:23:09.843794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:17.921 [2024-11-18 14:23:09.843853] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.921 [2024-11-18 
14:23:09.843880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:17.921 [2024-11-18 14:23:09.843904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.921 [2024-11-18 14:23:09.844218] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.921 [2024-11-18 14:23:09.844253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:17.921 [2024-11-18 14:23:09.844313] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:17.921 [2024-11-18 14:23:09.844332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:17.921 [2024-11-18 14:23:09.844415] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:17.921 [2024-11-18 14:23:09.844426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:17.921 [2024-11-18 14:23:09.844483] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:21:17.921 [2024-11-18 14:23:09.845069] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:17.921 [2024-11-18 14:23:09.845083] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:17.921 [2024-11-18 14:23:09.845211] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.921 pt2 00:21:17.921 14:23:09 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.922 14:23:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.180 14:23:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.180 "name": "raid_bdev1", 00:21:18.180 "uuid": "7f61c275-ab68-4fe9-b05d-5f8a02658ec5", 00:21:18.180 "strip_size_kb": 64, 00:21:18.180 "state": "online", 00:21:18.180 "raid_level": "raid5f", 00:21:18.180 "superblock": true, 00:21:18.180 "num_base_bdevs": 3, 00:21:18.180 "num_base_bdevs_discovered": 2, 00:21:18.180 "num_base_bdevs_operational": 2, 00:21:18.180 "base_bdevs_list": [ 00:21:18.180 { 00:21:18.180 "name": null, 00:21:18.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.180 "is_configured": false, 00:21:18.180 "data_offset": 2048, 00:21:18.180 "data_size": 63488 00:21:18.180 }, 00:21:18.180 { 00:21:18.180 "name": "pt2", 00:21:18.180 "uuid": "609d846f-6555-5295-ad03-56c5d681ecfa", 00:21:18.180 "is_configured": true, 00:21:18.180 "data_offset": 2048, 
00:21:18.180 "data_size": 63488 00:21:18.180 }, 00:21:18.180 { 00:21:18.180 "name": "pt3", 00:21:18.180 "uuid": "98fe2d00-3b73-5f20-be20-49889ab2a1d9", 00:21:18.180 "is_configured": true, 00:21:18.180 "data_offset": 2048, 00:21:18.180 "data_size": 63488 00:21:18.180 } 00:21:18.180 ] 00:21:18.180 }' 00:21:18.180 14:23:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.180 14:23:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.747 14:23:10 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:21:18.747 14:23:10 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:19.006 [2024-11-18 14:23:10.888084] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.006 14:23:10 -- bdev/bdev_raid.sh@506 -- # '[' 7f61c275-ab68-4fe9-b05d-5f8a02658ec5 '!=' 7f61c275-ab68-4fe9-b05d-5f8a02658ec5 ']' 00:21:19.006 14:23:10 -- bdev/bdev_raid.sh@511 -- # killprocess 137550 00:21:19.006 14:23:10 -- common/autotest_common.sh@936 -- # '[' -z 137550 ']' 00:21:19.006 14:23:10 -- common/autotest_common.sh@940 -- # kill -0 137550 00:21:19.006 14:23:10 -- common/autotest_common.sh@941 -- # uname 00:21:19.006 14:23:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.006 14:23:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137550 00:21:19.006 killing process with pid 137550 00:21:19.006 14:23:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:19.006 14:23:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:19.006 14:23:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137550' 00:21:19.006 14:23:10 -- common/autotest_common.sh@955 -- # kill 137550 00:21:19.006 [2024-11-18 14:23:10.925304] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:19.006 [2024-11-18 14:23:10.925354] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.006 [2024-11-18 14:23:10.925398] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.006 14:23:10 -- common/autotest_common.sh@960 -- # wait 137550 00:21:19.006 [2024-11-18 14:23:10.925407] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:19.006 [2024-11-18 14:23:10.961062] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:19.266 ************************************ 00:21:19.266 END TEST raid5f_superblock_test 00:21:19.266 ************************************ 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:19.266 00:21:19.266 real 0m17.562s 00:21:19.266 user 0m33.128s 00:21:19.266 sys 0m1.914s 00:21:19.266 14:23:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:19.266 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:21:19.266 14:23:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:19.266 14:23:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:19.266 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:21:19.266 ************************************ 00:21:19.266 START TEST raid5f_rebuild_test 00:21:19.266 ************************************ 00:21:19.266 14:23:11 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 
false false 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@544 -- # raid_pid=138136 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138136 /var/tmp/spdk-raid.sock 00:21:19.266 14:23:11 -- common/autotest_common.sh@829 -- # '[' -z 138136 ']' 00:21:19.266 14:23:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:19.266 14:23:11 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:19.266 14:23:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:19.266 14:23:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:19.266 14:23:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.266 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:21:19.525 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:19.525 Zero copy mechanism will not be used. 00:21:19.525 [2024-11-18 14:23:11.359876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
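Note on the transition above: raid5f_superblock_test exits 0 after ~17.5 s of real time, and raid5f_rebuild_test now launches bdevperf as the RPC target it will drive for the rebuild scenario. A hedged reading of the bdevperf flags in that command line, going by SPDK's bdevperf usage rather than anything this log states explicitly: -r points at the RPC socket the harness controls via rpc.py, -T raid_bdev1 names the bdev under test, -t 60 bounds the run to 60 seconds, -w randrw -M 50 requests a 50/50 random read/write mix, -o 3M issues 3 MiB I/Os (which is what triggers the zero-copy threshold notice just below), -q 2 sets the queue depth, and -z holds bdevperf until an RPC start signal. The assembly the test then performs over that socket can be sketched by hand using only commands that appear verbatim later in this log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # create the three malloc base bdevs (32 MiB, 512-byte blocks), as the test does
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do $RPC bdev_malloc_create 32 512 -b "$b"; done
  # assemble them into a raid5f bdev with a 64 KiB strip and no superblock
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
  # inspect state the same way verify_raid_bdev_state does; "online" is expected here
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'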
00:21:19.525 [2024-11-18 14:23:11.360063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138136 ] 00:21:19.525 [2024-11-18 14:23:11.494826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.525 [2024-11-18 14:23:11.566127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.785 [2024-11-18 14:23:11.636111] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:20.351 14:23:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.351 14:23:12 -- common/autotest_common.sh@862 -- # return 0 00:21:20.351 14:23:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:20.352 14:23:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:20.352 14:23:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:20.610 BaseBdev1 00:21:20.610 14:23:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:20.610 14:23:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:20.610 14:23:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:20.868 BaseBdev2 00:21:20.868 14:23:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:20.868 14:23:12 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:20.868 14:23:12 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:21.127 BaseBdev3 00:21:21.127 14:23:12 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:21.127 spare_malloc 00:21:21.127 14:23:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:21.386 spare_delay 00:21:21.386 14:23:13 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:21.646 [2024-11-18 14:23:13.535305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:21.646 [2024-11-18 14:23:13.535423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.646 [2024-11-18 14:23:13.535468] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:21.646 [2024-11-18 14:23:13.535522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.646 [2024-11-18 14:23:13.537926] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.646 [2024-11-18 14:23:13.537983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:21.646 spare 00:21:21.646 14:23:13 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:21:21.646 [2024-11-18 14:23:13.715402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:21.646 [2024-11-18 14:23:13.717368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:21:21.646 [2024-11-18 14:23:13.717424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:21.646 [2024-11-18 14:23:13.717506] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:21:21.646 [2024-11-18 14:23:13.717528] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:21.646 [2024-11-18 14:23:13.717689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:21:21.646 [2024-11-18 14:23:13.718421] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:21:21.646 [2024-11-18 14:23:13.718444] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:21:21.646 [2024-11-18 14:23:13.718604] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.905 "name": "raid_bdev1", 00:21:21.905 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:21.905 "strip_size_kb": 64, 00:21:21.905 "state": "online", 00:21:21.905 "raid_level": "raid5f", 00:21:21.905 "superblock": false, 00:21:21.905 "num_base_bdevs": 3, 00:21:21.905 "num_base_bdevs_discovered": 3, 00:21:21.905 "num_base_bdevs_operational": 3, 00:21:21.905 "base_bdevs_list": [ 00:21:21.905 { 00:21:21.905 "name": "BaseBdev1", 00:21:21.905 "uuid": "fedca782-7fdf-42cc-9a7d-72c2038489d7", 00:21:21.905 "is_configured": true, 00:21:21.905 "data_offset": 0, 00:21:21.905 "data_size": 65536 00:21:21.905 }, 00:21:21.905 { 00:21:21.905 "name": "BaseBdev2", 00:21:21.905 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:21.905 "is_configured": true, 00:21:21.905 "data_offset": 0, 00:21:21.905 "data_size": 65536 00:21:21.905 }, 00:21:21.905 { 00:21:21.905 "name": "BaseBdev3", 00:21:21.905 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:21.905 "is_configured": true, 00:21:21.905 "data_offset": 0, 00:21:21.905 "data_size": 65536 00:21:21.905 } 00:21:21.905 ] 00:21:21.905 }' 00:21:21.905 14:23:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.905 14:23:13 -- common/autotest_common.sh@10 -- # set +x 00:21:22.472 14:23:14 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:22.472 14:23:14 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:22.731 [2024-11-18 14:23:14.760872] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.731 14:23:14 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:21:22.731 14:23:14 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.731 14:23:14 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:22.990 14:23:14 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:22.990 14:23:14 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:22.990 14:23:14 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:22.990 14:23:14 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@12 -- # local i 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:22.990 14:23:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:23.249 [2024-11-18 14:23:15.136814] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:21:23.249 /dev/nbd0 00:21:23.249 14:23:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:23.249 14:23:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:23.249 14:23:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:23.249 14:23:15 -- common/autotest_common.sh@867 -- # local i 00:21:23.249 14:23:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:23.249 14:23:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:23.249 14:23:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:23.249 14:23:15 -- common/autotest_common.sh@871 -- # break 00:21:23.249 14:23:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:23.249 14:23:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:23.249 14:23:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:23.249 1+0 records in 00:21:23.249 1+0 records out 00:21:23.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249495 s, 16.4 MB/s 00:21:23.249 14:23:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.249 14:23:15 -- common/autotest_common.sh@884 -- # size=4096 00:21:23.249 14:23:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:23.249 14:23:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:23.249 14:23:15 -- common/autotest_common.sh@887 -- # return 0 00:21:23.249 14:23:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:23.249 14:23:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:23.249 14:23:15 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:21:23.249 14:23:15 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:21:23.249 14:23:15 -- bdev/bdev_raid.sh@582 -- # echo 128 00:21:23.249 14:23:15 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:21:23.817 512+0 records in 00:21:23.817 
512+0 records out 00:21:23.817 67108864 bytes (67 MB, 64 MiB) copied, 0.381408 s, 176 MB/s 00:21:23.817 14:23:15 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@51 -- # local i 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@41 -- # break 00:21:23.817 [2024-11-18 14:23:15.868362] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.817 14:23:15 -- bdev/nbd_common.sh@45 -- # return 0 00:21:23.817 14:23:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:24.076 [2024-11-18 14:23:16.040006] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.076 14:23:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.334 14:23:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:24.334 "name": "raid_bdev1", 00:21:24.334 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:24.334 "strip_size_kb": 64, 00:21:24.334 "state": "online", 00:21:24.334 "raid_level": "raid5f", 00:21:24.334 "superblock": false, 00:21:24.334 "num_base_bdevs": 3, 00:21:24.334 "num_base_bdevs_discovered": 2, 00:21:24.334 "num_base_bdevs_operational": 2, 00:21:24.334 "base_bdevs_list": [ 00:21:24.334 { 00:21:24.334 "name": null, 00:21:24.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.334 "is_configured": false, 00:21:24.334 "data_offset": 0, 00:21:24.334 "data_size": 65536 00:21:24.334 }, 00:21:24.334 { 00:21:24.334 "name": "BaseBdev2", 00:21:24.334 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:24.334 "is_configured": true, 00:21:24.334 "data_offset": 0, 00:21:24.334 
"data_size": 65536 00:21:24.334 }, 00:21:24.334 { 00:21:24.334 "name": "BaseBdev3", 00:21:24.334 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:24.334 "is_configured": true, 00:21:24.334 "data_offset": 0, 00:21:24.334 "data_size": 65536 00:21:24.334 } 00:21:24.334 ] 00:21:24.334 }' 00:21:24.334 14:23:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:24.334 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:21:24.903 14:23:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:25.163 [2024-11-18 14:23:17.096163] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:25.163 [2024-11-18 14:23:17.096214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:25.163 [2024-11-18 14:23:17.102381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027990 00:21:25.163 [2024-11-18 14:23:17.104741] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.163 14:23:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.100 14:23:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.373 14:23:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.373 "name": "raid_bdev1", 00:21:26.373 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:26.373 "strip_size_kb": 64, 00:21:26.373 "state": "online", 00:21:26.373 "raid_level": "raid5f", 00:21:26.373 "superblock": false, 00:21:26.373 "num_base_bdevs": 3, 00:21:26.373 "num_base_bdevs_discovered": 3, 00:21:26.373 "num_base_bdevs_operational": 3, 00:21:26.373 "process": { 00:21:26.373 "type": "rebuild", 00:21:26.373 "target": "spare", 00:21:26.373 "progress": { 00:21:26.373 "blocks": 24576, 00:21:26.373 "percent": 18 00:21:26.373 } 00:21:26.373 }, 00:21:26.373 "base_bdevs_list": [ 00:21:26.373 { 00:21:26.373 "name": "spare", 00:21:26.373 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:26.373 "is_configured": true, 00:21:26.373 "data_offset": 0, 00:21:26.373 "data_size": 65536 00:21:26.373 }, 00:21:26.373 { 00:21:26.373 "name": "BaseBdev2", 00:21:26.373 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:26.373 "is_configured": true, 00:21:26.373 "data_offset": 0, 00:21:26.373 "data_size": 65536 00:21:26.373 }, 00:21:26.373 { 00:21:26.373 "name": "BaseBdev3", 00:21:26.373 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:26.373 "is_configured": true, 00:21:26.373 "data_offset": 0, 00:21:26.373 "data_size": 65536 00:21:26.373 } 00:21:26.373 ] 00:21:26.373 }' 00:21:26.373 14:23:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.373 14:23:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.373 14:23:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.666 14:23:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.666 14:23:18 -- bdev/bdev_raid.sh@604 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:26.666 [2024-11-18 14:23:18.694617] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:26.666 [2024-11-18 14:23:18.717956] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:26.666 [2024-11-18 14:23:18.718046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.935 14:23:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.935 "name": "raid_bdev1", 00:21:26.935 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:26.935 "strip_size_kb": 64, 00:21:26.935 "state": "online", 00:21:26.935 "raid_level": "raid5f", 00:21:26.935 "superblock": false, 00:21:26.935 "num_base_bdevs": 3, 00:21:26.935 "num_base_bdevs_discovered": 2, 00:21:26.935 "num_base_bdevs_operational": 2, 00:21:26.935 "base_bdevs_list": [ 00:21:26.935 { 00:21:26.936 "name": null, 00:21:26.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.936 "is_configured": false, 00:21:26.936 "data_offset": 0, 00:21:26.936 "data_size": 65536 00:21:26.936 }, 00:21:26.936 { 00:21:26.936 "name": "BaseBdev2", 00:21:26.936 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:26.936 "is_configured": true, 00:21:26.936 "data_offset": 0, 00:21:26.936 "data_size": 65536 00:21:26.936 }, 00:21:26.936 { 00:21:26.936 "name": "BaseBdev3", 00:21:26.936 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:26.936 "is_configured": true, 00:21:26.936 "data_offset": 0, 00:21:26.936 "data_size": 65536 00:21:26.936 } 00:21:26.936 ] 00:21:26.936 }' 00:21:26.936 14:23:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.936 14:23:18 -- common/autotest_common.sh@10 -- # set +x 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.503 14:23:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.761 14:23:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.761 "name": 
"raid_bdev1", 00:21:27.761 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:27.761 "strip_size_kb": 64, 00:21:27.761 "state": "online", 00:21:27.761 "raid_level": "raid5f", 00:21:27.761 "superblock": false, 00:21:27.761 "num_base_bdevs": 3, 00:21:27.761 "num_base_bdevs_discovered": 2, 00:21:27.761 "num_base_bdevs_operational": 2, 00:21:27.761 "base_bdevs_list": [ 00:21:27.761 { 00:21:27.761 "name": null, 00:21:27.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.761 "is_configured": false, 00:21:27.761 "data_offset": 0, 00:21:27.761 "data_size": 65536 00:21:27.761 }, 00:21:27.761 { 00:21:27.761 "name": "BaseBdev2", 00:21:27.761 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:27.761 "is_configured": true, 00:21:27.761 "data_offset": 0, 00:21:27.761 "data_size": 65536 00:21:27.761 }, 00:21:27.761 { 00:21:27.761 "name": "BaseBdev3", 00:21:27.761 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:27.761 "is_configured": true, 00:21:27.761 "data_offset": 0, 00:21:27.761 "data_size": 65536 00:21:27.761 } 00:21:27.761 ] 00:21:27.761 }' 00:21:27.761 14:23:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.021 14:23:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.021 14:23:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.021 14:23:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:28.021 14:23:19 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.021 [2024-11-18 14:23:20.083609] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:28.021 [2024-11-18 14:23:20.083643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.021 [2024-11-18 14:23:20.085351] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027b30 00:21:28.021 [2024-11-18 14:23:20.087565] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.279 14:23:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.214 14:23:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.474 "name": "raid_bdev1", 00:21:29.474 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:29.474 "strip_size_kb": 64, 00:21:29.474 "state": "online", 00:21:29.474 "raid_level": "raid5f", 00:21:29.474 "superblock": false, 00:21:29.474 "num_base_bdevs": 3, 00:21:29.474 "num_base_bdevs_discovered": 3, 00:21:29.474 "num_base_bdevs_operational": 3, 00:21:29.474 "process": { 00:21:29.474 "type": "rebuild", 00:21:29.474 "target": "spare", 00:21:29.474 "progress": { 00:21:29.474 "blocks": 24576, 00:21:29.474 "percent": 18 00:21:29.474 } 00:21:29.474 }, 00:21:29.474 "base_bdevs_list": [ 00:21:29.474 { 00:21:29.474 "name": "spare", 00:21:29.474 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 
00:21:29.474 "is_configured": true, 00:21:29.474 "data_offset": 0, 00:21:29.474 "data_size": 65536 00:21:29.474 }, 00:21:29.474 { 00:21:29.474 "name": "BaseBdev2", 00:21:29.474 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:29.474 "is_configured": true, 00:21:29.474 "data_offset": 0, 00:21:29.474 "data_size": 65536 00:21:29.474 }, 00:21:29.474 { 00:21:29.474 "name": "BaseBdev3", 00:21:29.474 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:29.474 "is_configured": true, 00:21:29.474 "data_offset": 0, 00:21:29.474 "data_size": 65536 00:21:29.474 } 00:21:29.474 ] 00:21:29.474 }' 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@657 -- # local timeout=557 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.474 14:23:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.733 14:23:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.733 "name": "raid_bdev1", 00:21:29.733 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:29.733 "strip_size_kb": 64, 00:21:29.733 "state": "online", 00:21:29.733 "raid_level": "raid5f", 00:21:29.733 "superblock": false, 00:21:29.733 "num_base_bdevs": 3, 00:21:29.733 "num_base_bdevs_discovered": 3, 00:21:29.733 "num_base_bdevs_operational": 3, 00:21:29.733 "process": { 00:21:29.733 "type": "rebuild", 00:21:29.733 "target": "spare", 00:21:29.733 "progress": { 00:21:29.733 "blocks": 30720, 00:21:29.733 "percent": 23 00:21:29.733 } 00:21:29.733 }, 00:21:29.733 "base_bdevs_list": [ 00:21:29.733 { 00:21:29.733 "name": "spare", 00:21:29.733 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:29.733 "is_configured": true, 00:21:29.733 "data_offset": 0, 00:21:29.733 "data_size": 65536 00:21:29.733 }, 00:21:29.733 { 00:21:29.733 "name": "BaseBdev2", 00:21:29.733 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:29.733 "is_configured": true, 00:21:29.733 "data_offset": 0, 00:21:29.733 "data_size": 65536 00:21:29.733 }, 00:21:29.733 { 00:21:29.733 "name": "BaseBdev3", 00:21:29.733 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:29.733 "is_configured": true, 00:21:29.733 "data_offset": 0, 00:21:29.733 "data_size": 65536 00:21:29.733 } 00:21:29.733 ] 00:21:29.733 }' 00:21:29.733 14:23:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.733 14:23:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.733 14:23:21 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:21:29.733 14:23:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.733 14:23:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.112 14:23:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.112 14:23:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.112 "name": "raid_bdev1", 00:21:31.112 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:31.112 "strip_size_kb": 64, 00:21:31.112 "state": "online", 00:21:31.112 "raid_level": "raid5f", 00:21:31.112 "superblock": false, 00:21:31.112 "num_base_bdevs": 3, 00:21:31.112 "num_base_bdevs_discovered": 3, 00:21:31.112 "num_base_bdevs_operational": 3, 00:21:31.112 "process": { 00:21:31.112 "type": "rebuild", 00:21:31.112 "target": "spare", 00:21:31.112 "progress": { 00:21:31.112 "blocks": 59392, 00:21:31.112 "percent": 45 00:21:31.112 } 00:21:31.112 }, 00:21:31.112 "base_bdevs_list": [ 00:21:31.112 { 00:21:31.112 "name": "spare", 00:21:31.112 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:31.112 "is_configured": true, 00:21:31.112 "data_offset": 0, 00:21:31.112 "data_size": 65536 00:21:31.112 }, 00:21:31.112 { 00:21:31.112 "name": "BaseBdev2", 00:21:31.112 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:31.112 "is_configured": true, 00:21:31.112 "data_offset": 0, 00:21:31.112 "data_size": 65536 00:21:31.112 }, 00:21:31.112 { 00:21:31.112 "name": "BaseBdev3", 00:21:31.112 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:31.112 "is_configured": true, 00:21:31.112 "data_offset": 0, 00:21:31.112 "data_size": 65536 00:21:31.112 } 00:21:31.112 ] 00:21:31.112 }' 00:21:31.112 14:23:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.112 14:23:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.112 14:23:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.113 14:23:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.113 14:23:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.490 "name": "raid_bdev1", 00:21:32.490 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 
00:21:32.490 "strip_size_kb": 64, 00:21:32.490 "state": "online", 00:21:32.490 "raid_level": "raid5f", 00:21:32.490 "superblock": false, 00:21:32.490 "num_base_bdevs": 3, 00:21:32.490 "num_base_bdevs_discovered": 3, 00:21:32.490 "num_base_bdevs_operational": 3, 00:21:32.490 "process": { 00:21:32.490 "type": "rebuild", 00:21:32.490 "target": "spare", 00:21:32.490 "progress": { 00:21:32.490 "blocks": 86016, 00:21:32.490 "percent": 65 00:21:32.490 } 00:21:32.490 }, 00:21:32.490 "base_bdevs_list": [ 00:21:32.490 { 00:21:32.490 "name": "spare", 00:21:32.490 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:32.490 "is_configured": true, 00:21:32.490 "data_offset": 0, 00:21:32.490 "data_size": 65536 00:21:32.490 }, 00:21:32.490 { 00:21:32.490 "name": "BaseBdev2", 00:21:32.490 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:32.490 "is_configured": true, 00:21:32.490 "data_offset": 0, 00:21:32.490 "data_size": 65536 00:21:32.490 }, 00:21:32.490 { 00:21:32.490 "name": "BaseBdev3", 00:21:32.490 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:32.490 "is_configured": true, 00:21:32.490 "data_offset": 0, 00:21:32.490 "data_size": 65536 00:21:32.490 } 00:21:32.490 ] 00:21:32.490 }' 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.490 14:23:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.427 14:23:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.686 14:23:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.686 "name": "raid_bdev1", 00:21:33.686 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:33.686 "strip_size_kb": 64, 00:21:33.686 "state": "online", 00:21:33.686 "raid_level": "raid5f", 00:21:33.686 "superblock": false, 00:21:33.686 "num_base_bdevs": 3, 00:21:33.686 "num_base_bdevs_discovered": 3, 00:21:33.686 "num_base_bdevs_operational": 3, 00:21:33.686 "process": { 00:21:33.686 "type": "rebuild", 00:21:33.686 "target": "spare", 00:21:33.686 "progress": { 00:21:33.686 "blocks": 112640, 00:21:33.686 "percent": 85 00:21:33.686 } 00:21:33.686 }, 00:21:33.686 "base_bdevs_list": [ 00:21:33.686 { 00:21:33.686 "name": "spare", 00:21:33.686 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:33.686 "is_configured": true, 00:21:33.686 "data_offset": 0, 00:21:33.686 "data_size": 65536 00:21:33.686 }, 00:21:33.686 { 00:21:33.686 "name": "BaseBdev2", 00:21:33.686 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:33.686 "is_configured": true, 00:21:33.686 "data_offset": 0, 00:21:33.686 "data_size": 65536 00:21:33.686 }, 00:21:33.686 { 00:21:33.686 "name": "BaseBdev3", 00:21:33.686 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 
00:21:33.686 "is_configured": true, 00:21:33.686 "data_offset": 0, 00:21:33.686 "data_size": 65536 00:21:33.686 } 00:21:33.686 ] 00:21:33.686 }' 00:21:33.686 14:23:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.945 14:23:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.945 14:23:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.945 14:23:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.945 14:23:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:34.513 [2024-11-18 14:23:26.536132] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:34.513 [2024-11-18 14:23:26.536228] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:34.513 [2024-11-18 14:23:26.536335] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.773 14:23:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.032 14:23:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.032 "name": "raid_bdev1", 00:21:35.032 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:35.032 "strip_size_kb": 64, 00:21:35.032 "state": "online", 00:21:35.032 "raid_level": "raid5f", 00:21:35.032 "superblock": false, 00:21:35.032 "num_base_bdevs": 3, 00:21:35.032 "num_base_bdevs_discovered": 3, 00:21:35.032 "num_base_bdevs_operational": 3, 00:21:35.032 "base_bdevs_list": [ 00:21:35.032 { 00:21:35.032 "name": "spare", 00:21:35.032 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:35.032 "is_configured": true, 00:21:35.032 "data_offset": 0, 00:21:35.032 "data_size": 65536 00:21:35.032 }, 00:21:35.032 { 00:21:35.032 "name": "BaseBdev2", 00:21:35.032 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:35.032 "is_configured": true, 00:21:35.032 "data_offset": 0, 00:21:35.032 "data_size": 65536 00:21:35.032 }, 00:21:35.032 { 00:21:35.032 "name": "BaseBdev3", 00:21:35.032 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:35.032 "is_configured": true, 00:21:35.032 "data_offset": 0, 00:21:35.032 "data_size": 65536 00:21:35.032 } 00:21:35.032 ] 00:21:35.032 }' 00:21:35.032 14:23:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@660 -- # break 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:35.291 14:23:27 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.291 14:23:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.550 "name": "raid_bdev1", 00:21:35.550 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:35.550 "strip_size_kb": 64, 00:21:35.550 "state": "online", 00:21:35.550 "raid_level": "raid5f", 00:21:35.550 "superblock": false, 00:21:35.550 "num_base_bdevs": 3, 00:21:35.550 "num_base_bdevs_discovered": 3, 00:21:35.550 "num_base_bdevs_operational": 3, 00:21:35.550 "base_bdevs_list": [ 00:21:35.550 { 00:21:35.550 "name": "spare", 00:21:35.550 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:35.550 "is_configured": true, 00:21:35.550 "data_offset": 0, 00:21:35.550 "data_size": 65536 00:21:35.550 }, 00:21:35.550 { 00:21:35.550 "name": "BaseBdev2", 00:21:35.550 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:35.550 "is_configured": true, 00:21:35.550 "data_offset": 0, 00:21:35.550 "data_size": 65536 00:21:35.550 }, 00:21:35.550 { 00:21:35.550 "name": "BaseBdev3", 00:21:35.550 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:35.550 "is_configured": true, 00:21:35.550 "data_offset": 0, 00:21:35.550 "data_size": 65536 00:21:35.550 } 00:21:35.550 ] 00:21:35.550 }' 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.550 14:23:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.809 14:23:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.809 "name": "raid_bdev1", 00:21:35.809 "uuid": "bfbbee12-d53f-46f3-82c2-5719d2b79628", 00:21:35.809 "strip_size_kb": 64, 00:21:35.809 "state": "online", 00:21:35.809 "raid_level": "raid5f", 00:21:35.809 "superblock": false, 00:21:35.809 "num_base_bdevs": 3, 00:21:35.809 "num_base_bdevs_discovered": 3, 00:21:35.809 "num_base_bdevs_operational": 3, 00:21:35.809 "base_bdevs_list": [ 00:21:35.809 { 00:21:35.809 "name": "spare", 00:21:35.809 "uuid": "350f6b83-d5a4-52c8-b97e-4e04a925b3d6", 00:21:35.809 "is_configured": true, 00:21:35.809 "data_offset": 0, 00:21:35.809 "data_size": 65536 00:21:35.809 }, 00:21:35.809 { 00:21:35.809 "name": "BaseBdev2", 
00:21:35.809 "uuid": "6f27c8c9-35df-4559-87c7-598959a66499", 00:21:35.809 "is_configured": true, 00:21:35.809 "data_offset": 0, 00:21:35.809 "data_size": 65536 00:21:35.809 }, 00:21:35.809 { 00:21:35.809 "name": "BaseBdev3", 00:21:35.809 "uuid": "bca8a68c-73b8-49e3-8798-d98fb1db9739", 00:21:35.809 "is_configured": true, 00:21:35.809 "data_offset": 0, 00:21:35.809 "data_size": 65536 00:21:35.809 } 00:21:35.809 ] 00:21:35.809 }' 00:21:35.809 14:23:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.809 14:23:27 -- common/autotest_common.sh@10 -- # set +x 00:21:36.376 14:23:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:36.376 [2024-11-18 14:23:28.432886] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.376 [2024-11-18 14:23:28.432920] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.376 [2024-11-18 14:23:28.433022] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.376 [2024-11-18 14:23:28.433108] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.376 [2024-11-18 14:23:28.433122] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:21:36.376 14:23:28 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.376 14:23:28 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:36.635 14:23:28 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:36.635 14:23:28 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:36.635 14:23:28 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:36.635 14:23:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:36.893 /dev/nbd0 00:21:36.893 14:23:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.893 14:23:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.893 14:23:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:36.893 14:23:28 -- common/autotest_common.sh@867 -- # local i 00:21:36.893 14:23:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:36.893 14:23:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:36.893 14:23:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:37.152 14:23:28 -- common/autotest_common.sh@871 -- # break 00:21:37.152 14:23:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:37.152 14:23:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:37.152 14:23:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.152 1+0 records 
in 00:21:37.152 1+0 records out 00:21:37.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040901 s, 10.0 MB/s 00:21:37.152 14:23:28 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.152 14:23:28 -- common/autotest_common.sh@884 -- # size=4096 00:21:37.152 14:23:28 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.152 14:23:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:37.152 14:23:28 -- common/autotest_common.sh@887 -- # return 0 00:21:37.152 14:23:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.152 14:23:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.152 14:23:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:37.152 /dev/nbd1 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.411 14:23:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:37.411 14:23:29 -- common/autotest_common.sh@867 -- # local i 00:21:37.411 14:23:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:37.411 14:23:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:37.411 14:23:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:37.411 14:23:29 -- common/autotest_common.sh@871 -- # break 00:21:37.411 14:23:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:37.411 14:23:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:37.411 14:23:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.411 1+0 records in 00:21:37.411 1+0 records out 00:21:37.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005349 s, 7.7 MB/s 00:21:37.411 14:23:29 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.411 14:23:29 -- common/autotest_common.sh@884 -- # size=4096 00:21:37.411 14:23:29 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.411 14:23:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:37.411 14:23:29 -- common/autotest_common.sh@887 -- # return 0 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:37.411 14:23:29 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:37.411 14:23:29 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.411 14:23:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.670 14:23:29 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@41 -- # break 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.670 14:23:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@41 -- # break 00:21:37.929 14:23:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.929 14:23:29 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:37.929 14:23:29 -- bdev/bdev_raid.sh@709 -- # killprocess 138136 00:21:37.929 14:23:29 -- common/autotest_common.sh@936 -- # '[' -z 138136 ']' 00:21:37.929 14:23:29 -- common/autotest_common.sh@940 -- # kill -0 138136 00:21:37.929 14:23:29 -- common/autotest_common.sh@941 -- # uname 00:21:37.929 14:23:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:37.929 14:23:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138136 00:21:37.929 14:23:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:37.929 14:23:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:37.929 14:23:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138136' 00:21:37.929 killing process with pid 138136 00:21:37.929 14:23:29 -- common/autotest_common.sh@955 -- # kill 138136 00:21:37.929 Received shutdown signal, test time was about 60.000000 seconds 00:21:37.929 00:21:37.929 Latency(us) 00:21:37.929 [2024-11-18T14:23:30.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.929 [2024-11-18T14:23:30.003Z] =================================================================================================================== 00:21:37.929 [2024-11-18T14:23:30.003Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:37.929 [2024-11-18 14:23:29.861509] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.929 14:23:29 -- common/autotest_common.sh@960 -- # wait 138136 00:21:37.929 [2024-11-18 14:23:29.908460] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.188 14:23:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:38.188 00:21:38.188 real 0m18.930s 00:21:38.188 user 0m28.819s 00:21:38.188 sys 0m2.389s 00:21:38.188 14:23:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:38.188 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:21:38.188 ************************************ 00:21:38.188 END TEST raid5f_rebuild_test 00:21:38.188 ************************************ 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:21:38.446 14:23:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:38.446 14:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.446 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:21:38.446 ************************************ 00:21:38.446 START TEST raid5f_rebuild_test_sb 00:21:38.446 
************************************ 00:21:38.446 14:23:30 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=138665 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138665 /var/tmp/spdk-raid.sock 00:21:38.446 14:23:30 -- common/autotest_common.sh@829 -- # '[' -z 138665 ']' 00:21:38.446 14:23:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:38.446 14:23:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:38.446 14:23:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.446 14:23:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:38.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:38.446 14:23:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.446 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:21:38.446 [2024-11-18 14:23:30.355928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
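
Condensed from the xtrace above, for readers following the trace: the sb variant of the test launches bdevperf as its RPC target and blocks on waitforlisten before creating any bdevs. A minimal sketch of that launch sequence, assuming the helpers from common/autotest_common.sh are sourced (paths and flags are copied from the traced command line):

    # Start bdevperf with the traced arguments: 60 s of 50/50 randrw at
    # queue depth 2 with 3 MiB I/Os, serving RPC on a private UNIX socket.
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the RPC server is accepting requests on the socket
    waitforlisten "$raid_pid" "$rpc_sock"

The -o 3M I/O size is also what triggers the zero copy threshold notice printed during startup just below.
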
00:21:38.446 [2024-11-18 14:23:30.356093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138665 ] 00:21:38.447 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:38.447 Zero copy mechanism will not be used. 00:21:38.447 [2024-11-18 14:23:30.493087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.705 [2024-11-18 14:23:30.565397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.705 [2024-11-18 14:23:30.635749] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.272 14:23:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.272 14:23:31 -- common/autotest_common.sh@862 -- # return 0 00:21:39.272 14:23:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:39.272 14:23:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:39.272 14:23:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:39.531 BaseBdev1_malloc 00:21:39.531 14:23:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:39.789 [2024-11-18 14:23:31.771206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:39.789 [2024-11-18 14:23:31.771308] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.789 [2024-11-18 14:23:31.771346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:39.789 [2024-11-18 14:23:31.771389] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.789 [2024-11-18 14:23:31.773776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.789 [2024-11-18 14:23:31.773832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:39.789 BaseBdev1 00:21:39.789 14:23:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:39.789 14:23:31 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:39.789 14:23:31 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:40.048 BaseBdev2_malloc 00:21:40.048 14:23:31 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:40.307 [2024-11-18 14:23:32.172334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:40.307 [2024-11-18 14:23:32.172403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.307 [2024-11-18 14:23:32.172440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:40.307 [2024-11-18 14:23:32.172482] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.307 [2024-11-18 14:23:32.174723] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.307 [2024-11-18 14:23:32.174771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:40.307 BaseBdev2 00:21:40.307 14:23:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
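
Each pass of the for-loop above builds one array member the same way: a 32 MiB malloc bdev with 512-byte blocks, then a passthru bdev on top of it, so the test can later delete and re-create the member without touching its backing store. The RPC pair as traced for BaseBdev1 (the rpc and sock variables are shorthand for readability here, not part of the script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MiB backing store with 512-byte logical blocks (65536 blocks)
    $rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    # Passthru layer on top; the raid bdev claims BaseBdev1, not the malloc
    $rpc -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

With the superblock enabled (the -s in create_arg), 2048 of each member's 65536 blocks are reserved for metadata, which is why the JSON dumps below report data_offset 2048 and data_size 63488; the resulting raid5f bdev then exposes 2 x 63488 = 126976 data blocks, matching the blockcnt logged when raid_bdev1 is configured.
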
00:21:40.307 14:23:32 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:40.307 14:23:32 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:40.565 BaseBdev3_malloc 00:21:40.565 14:23:32 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:40.824 [2024-11-18 14:23:32.661890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:40.824 [2024-11-18 14:23:32.661959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.824 [2024-11-18 14:23:32.662005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:40.824 [2024-11-18 14:23:32.662050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.824 [2024-11-18 14:23:32.664002] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.824 [2024-11-18 14:23:32.664052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:40.824 BaseBdev3 00:21:40.824 14:23:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:40.824 spare_malloc 00:21:40.824 14:23:32 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:41.083 spare_delay 00:21:41.083 14:23:33 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:41.341 [2024-11-18 14:23:33.251639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:41.341 [2024-11-18 14:23:33.251723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.341 [2024-11-18 14:23:33.251758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:41.341 [2024-11-18 14:23:33.251801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.341 [2024-11-18 14:23:33.254054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.341 [2024-11-18 14:23:33.254105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:41.341 spare 00:21:41.341 14:23:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:21:41.600 [2024-11-18 14:23:33.491777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.600 [2024-11-18 14:23:33.493737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.600 [2024-11-18 14:23:33.493809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:41.600 [2024-11-18 14:23:33.493991] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:41.600 [2024-11-18 14:23:33.494005] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:41.600 [2024-11-18 14:23:33.494153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:41.600 [2024-11-18 14:23:33.494858] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:41.600 [2024-11-18 14:23:33.494880] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:41.600 [2024-11-18 14:23:33.495010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.600 14:23:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.859 14:23:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:41.859 "name": "raid_bdev1", 00:21:41.859 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:41.859 "strip_size_kb": 64, 00:21:41.859 "state": "online", 00:21:41.859 "raid_level": "raid5f", 00:21:41.859 "superblock": true, 00:21:41.859 "num_base_bdevs": 3, 00:21:41.859 "num_base_bdevs_discovered": 3, 00:21:41.859 "num_base_bdevs_operational": 3, 00:21:41.859 "base_bdevs_list": [ 00:21:41.859 { 00:21:41.859 "name": "BaseBdev1", 00:21:41.859 "uuid": "c9dcc0e5-ecef-5a07-9341-427dda22b103", 00:21:41.859 "is_configured": true, 00:21:41.859 "data_offset": 2048, 00:21:41.859 "data_size": 63488 00:21:41.859 }, 00:21:41.859 { 00:21:41.859 "name": "BaseBdev2", 00:21:41.859 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:41.859 "is_configured": true, 00:21:41.859 "data_offset": 2048, 00:21:41.859 "data_size": 63488 00:21:41.859 }, 00:21:41.859 { 00:21:41.859 "name": "BaseBdev3", 00:21:41.859 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:41.859 "is_configured": true, 00:21:41.859 "data_offset": 2048, 00:21:41.859 "data_size": 63488 00:21:41.859 } 00:21:41.859 ] 00:21:41.859 }' 00:21:41.859 14:23:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:41.859 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:21:42.434 14:23:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:42.434 14:23:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:42.692 [2024-11-18 14:23:34.525284] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.692 14:23:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:21:42.692 14:23:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.692 14:23:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:42.952 14:23:34 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:42.952 14:23:34 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:42.952 14:23:34 -- 
bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:42.952 14:23:34 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@12 -- # local i 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:42.952 14:23:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:42.952 [2024-11-18 14:23:34.973276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:42.952 /dev/nbd0 00:21:42.952 14:23:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:42.952 14:23:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:42.952 14:23:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:42.952 14:23:35 -- common/autotest_common.sh@867 -- # local i 00:21:42.952 14:23:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:42.952 14:23:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:42.952 14:23:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:42.952 14:23:35 -- common/autotest_common.sh@871 -- # break 00:21:42.952 14:23:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:42.952 14:23:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:42.952 14:23:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:43.211 1+0 records in 00:21:43.211 1+0 records out 00:21:43.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651147 s, 6.3 MB/s 00:21:43.211 14:23:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:43.211 14:23:35 -- common/autotest_common.sh@884 -- # size=4096 00:21:43.211 14:23:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:43.211 14:23:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:43.211 14:23:35 -- common/autotest_common.sh@887 -- # return 0 00:21:43.211 14:23:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:43.211 14:23:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:43.211 14:23:35 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:21:43.211 14:23:35 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:21:43.211 14:23:35 -- bdev/bdev_raid.sh@582 -- # echo 128 00:21:43.211 14:23:35 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:21:43.470 496+0 records in 00:21:43.470 496+0 records out 00:21:43.470 65011712 bytes (65 MB, 62 MiB) copied, 0.359479 s, 181 MB/s 00:21:43.470 14:23:35 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:43.470 14:23:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:43.470 14:23:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:43.470 14:23:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:43.471 14:23:35 -- bdev/nbd_common.sh@51 -- # local i 00:21:43.471 14:23:35 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:21:43.471 14:23:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:43.729 [2024-11-18 14:23:35.677654] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@41 -- # break 00:21:43.729 14:23:35 -- bdev/nbd_common.sh@45 -- # return 0 00:21:43.729 14:23:35 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:43.988 [2024-11-18 14:23:35.849269] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.988 14:23:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.988 14:23:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.988 "name": "raid_bdev1", 00:21:43.988 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:43.988 "strip_size_kb": 64, 00:21:43.988 "state": "online", 00:21:43.988 "raid_level": "raid5f", 00:21:43.988 "superblock": true, 00:21:43.988 "num_base_bdevs": 3, 00:21:43.988 "num_base_bdevs_discovered": 2, 00:21:43.988 "num_base_bdevs_operational": 2, 00:21:43.988 "base_bdevs_list": [ 00:21:43.988 { 00:21:43.988 "name": null, 00:21:43.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.988 "is_configured": false, 00:21:43.988 "data_offset": 2048, 00:21:43.988 "data_size": 63488 00:21:43.988 }, 00:21:43.988 { 00:21:43.988 "name": "BaseBdev2", 00:21:43.988 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:43.988 "is_configured": true, 00:21:43.988 "data_offset": 2048, 00:21:43.988 "data_size": 63488 00:21:43.988 }, 00:21:43.988 { 00:21:43.988 "name": "BaseBdev3", 00:21:43.988 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:43.988 "is_configured": true, 00:21:43.988 "data_offset": 2048, 00:21:43.988 "data_size": 63488 00:21:43.988 } 00:21:43.988 ] 00:21:43.988 }' 00:21:43.988 14:23:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.988 14:23:36 -- common/autotest_common.sh@10 -- # set +x 00:21:44.926 14:23:36 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:44.926 [2024-11-18 14:23:36.941461] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:44.926 [2024-11-18 14:23:36.941645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:44.926 [2024-11-18 14:23:36.947738] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 00:21:44.926 [2024-11-18 14:23:36.950267] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.926 14:23:36 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.303 14:23:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.303 14:23:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:46.303 "name": "raid_bdev1", 00:21:46.303 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:46.303 "strip_size_kb": 64, 00:21:46.303 "state": "online", 00:21:46.303 "raid_level": "raid5f", 00:21:46.303 "superblock": true, 00:21:46.303 "num_base_bdevs": 3, 00:21:46.303 "num_base_bdevs_discovered": 3, 00:21:46.303 "num_base_bdevs_operational": 3, 00:21:46.303 "process": { 00:21:46.303 "type": "rebuild", 00:21:46.303 "target": "spare", 00:21:46.303 "progress": { 00:21:46.303 "blocks": 24576, 00:21:46.303 "percent": 19 00:21:46.303 } 00:21:46.303 }, 00:21:46.303 "base_bdevs_list": [ 00:21:46.303 { 00:21:46.303 "name": "spare", 00:21:46.303 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:46.303 "is_configured": true, 00:21:46.303 "data_offset": 2048, 00:21:46.303 "data_size": 63488 00:21:46.303 }, 00:21:46.303 { 00:21:46.303 "name": "BaseBdev2", 00:21:46.304 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:46.304 "is_configured": true, 00:21:46.304 "data_offset": 2048, 00:21:46.304 "data_size": 63488 00:21:46.304 }, 00:21:46.304 { 00:21:46.304 "name": "BaseBdev3", 00:21:46.304 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:46.304 "is_configured": true, 00:21:46.304 "data_offset": 2048, 00:21:46.304 "data_size": 63488 00:21:46.304 } 00:21:46.304 ] 00:21:46.304 }' 00:21:46.304 14:23:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:46.304 14:23:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.304 14:23:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:46.304 14:23:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.304 14:23:38 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:46.563 [2024-11-18 14:23:38.472245] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.563 [2024-11-18 14:23:38.564559] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:46.563 [2024-11-18 14:23:38.564768] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.563 14:23:38 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.563 14:23:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.822 14:23:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.822 "name": "raid_bdev1", 00:21:46.822 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:46.822 "strip_size_kb": 64, 00:21:46.822 "state": "online", 00:21:46.822 "raid_level": "raid5f", 00:21:46.822 "superblock": true, 00:21:46.822 "num_base_bdevs": 3, 00:21:46.822 "num_base_bdevs_discovered": 2, 00:21:46.822 "num_base_bdevs_operational": 2, 00:21:46.822 "base_bdevs_list": [ 00:21:46.822 { 00:21:46.822 "name": null, 00:21:46.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.822 "is_configured": false, 00:21:46.822 "data_offset": 2048, 00:21:46.822 "data_size": 63488 00:21:46.822 }, 00:21:46.822 { 00:21:46.822 "name": "BaseBdev2", 00:21:46.822 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:46.822 "is_configured": true, 00:21:46.822 "data_offset": 2048, 00:21:46.822 "data_size": 63488 00:21:46.822 }, 00:21:46.822 { 00:21:46.822 "name": "BaseBdev3", 00:21:46.822 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:46.822 "is_configured": true, 00:21:46.822 "data_offset": 2048, 00:21:46.822 "data_size": 63488 00:21:46.822 } 00:21:46.822 ] 00:21:46.822 }' 00:21:46.822 14:23:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.822 14:23:38 -- common/autotest_common.sh@10 -- # set +x 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:47.757 "name": "raid_bdev1", 00:21:47.757 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:47.757 "strip_size_kb": 64, 00:21:47.757 "state": "online", 00:21:47.757 "raid_level": "raid5f", 00:21:47.757 "superblock": true, 00:21:47.757 "num_base_bdevs": 3, 00:21:47.757 "num_base_bdevs_discovered": 2, 00:21:47.757 "num_base_bdevs_operational": 2, 00:21:47.757 "base_bdevs_list": [ 00:21:47.757 { 00:21:47.757 "name": null, 00:21:47.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.757 "is_configured": 
false, 00:21:47.757 "data_offset": 2048, 00:21:47.757 "data_size": 63488 00:21:47.757 }, 00:21:47.757 { 00:21:47.757 "name": "BaseBdev2", 00:21:47.757 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:47.757 "is_configured": true, 00:21:47.757 "data_offset": 2048, 00:21:47.757 "data_size": 63488 00:21:47.757 }, 00:21:47.757 { 00:21:47.757 "name": "BaseBdev3", 00:21:47.757 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:47.757 "is_configured": true, 00:21:47.757 "data_offset": 2048, 00:21:47.757 "data_size": 63488 00:21:47.757 } 00:21:47.757 ] 00:21:47.757 }' 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:47.757 14:23:39 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:47.758 14:23:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:47.758 14:23:39 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:47.758 14:23:39 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:48.019 [2024-11-18 14:23:39.951109] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:48.019 [2024-11-18 14:23:39.951294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.019 [2024-11-18 14:23:39.953432] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:21:48.019 [2024-11-18 14:23:39.955517] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.019 14:23:39 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.951 14:23:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.209 14:23:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.209 "name": "raid_bdev1", 00:21:49.209 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:49.209 "strip_size_kb": 64, 00:21:49.209 "state": "online", 00:21:49.209 "raid_level": "raid5f", 00:21:49.209 "superblock": true, 00:21:49.209 "num_base_bdevs": 3, 00:21:49.209 "num_base_bdevs_discovered": 3, 00:21:49.209 "num_base_bdevs_operational": 3, 00:21:49.209 "process": { 00:21:49.209 "type": "rebuild", 00:21:49.209 "target": "spare", 00:21:49.209 "progress": { 00:21:49.209 "blocks": 24576, 00:21:49.209 "percent": 19 00:21:49.209 } 00:21:49.209 }, 00:21:49.209 "base_bdevs_list": [ 00:21:49.209 { 00:21:49.209 "name": "spare", 00:21:49.209 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:49.209 "is_configured": true, 00:21:49.209 "data_offset": 2048, 00:21:49.209 "data_size": 63488 00:21:49.209 }, 00:21:49.209 { 00:21:49.209 "name": "BaseBdev2", 00:21:49.209 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:49.209 "is_configured": true, 00:21:49.209 "data_offset": 2048, 00:21:49.209 "data_size": 63488 00:21:49.209 }, 00:21:49.209 { 00:21:49.209 "name": "BaseBdev3", 00:21:49.209 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:49.209 "is_configured": true, 
00:21:49.209 "data_offset": 2048, 00:21:49.209 "data_size": 63488 00:21:49.209 } 00:21:49.209 ] 00:21:49.209 }' 00:21:49.209 14:23:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.209 14:23:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.209 14:23:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:49.468 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@657 -- # local timeout=577 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.468 "name": "raid_bdev1", 00:21:49.468 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:49.468 "strip_size_kb": 64, 00:21:49.468 "state": "online", 00:21:49.468 "raid_level": "raid5f", 00:21:49.468 "superblock": true, 00:21:49.468 "num_base_bdevs": 3, 00:21:49.468 "num_base_bdevs_discovered": 3, 00:21:49.468 "num_base_bdevs_operational": 3, 00:21:49.468 "process": { 00:21:49.468 "type": "rebuild", 00:21:49.468 "target": "spare", 00:21:49.468 "progress": { 00:21:49.468 "blocks": 30720, 00:21:49.468 "percent": 24 00:21:49.468 } 00:21:49.468 }, 00:21:49.468 "base_bdevs_list": [ 00:21:49.468 { 00:21:49.468 "name": "spare", 00:21:49.468 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:49.468 "is_configured": true, 00:21:49.468 "data_offset": 2048, 00:21:49.468 "data_size": 63488 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "name": "BaseBdev2", 00:21:49.468 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:49.468 "is_configured": true, 00:21:49.468 "data_offset": 2048, 00:21:49.468 "data_size": 63488 00:21:49.468 }, 00:21:49.468 { 00:21:49.468 "name": "BaseBdev3", 00:21:49.468 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:49.468 "is_configured": true, 00:21:49.468 "data_offset": 2048, 00:21:49.468 "data_size": 63488 00:21:49.468 } 00:21:49.468 ] 00:21:49.468 }' 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.468 14:23:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.727 14:23:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.727 14:23:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.664 14:23:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.923 14:23:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.923 "name": "raid_bdev1", 00:21:50.923 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:50.923 "strip_size_kb": 64, 00:21:50.923 "state": "online", 00:21:50.923 "raid_level": "raid5f", 00:21:50.923 "superblock": true, 00:21:50.923 "num_base_bdevs": 3, 00:21:50.923 "num_base_bdevs_discovered": 3, 00:21:50.923 "num_base_bdevs_operational": 3, 00:21:50.923 "process": { 00:21:50.923 "type": "rebuild", 00:21:50.923 "target": "spare", 00:21:50.923 "progress": { 00:21:50.923 "blocks": 57344, 00:21:50.923 "percent": 45 00:21:50.923 } 00:21:50.923 }, 00:21:50.923 "base_bdevs_list": [ 00:21:50.923 { 00:21:50.923 "name": "spare", 00:21:50.923 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:50.923 "is_configured": true, 00:21:50.923 "data_offset": 2048, 00:21:50.923 "data_size": 63488 00:21:50.923 }, 00:21:50.923 { 00:21:50.923 "name": "BaseBdev2", 00:21:50.923 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:50.923 "is_configured": true, 00:21:50.923 "data_offset": 2048, 00:21:50.923 "data_size": 63488 00:21:50.923 }, 00:21:50.923 { 00:21:50.923 "name": "BaseBdev3", 00:21:50.923 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:50.923 "is_configured": true, 00:21:50.923 "data_offset": 2048, 00:21:50.923 "data_size": 63488 00:21:50.923 } 00:21:50.923 ] 00:21:50.923 }' 00:21:50.923 14:23:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.923 14:23:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.923 14:23:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.923 14:23:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.924 14:23:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.302 14:23:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.302 14:23:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.302 "name": "raid_bdev1", 00:21:52.302 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:52.302 "strip_size_kb": 64, 00:21:52.302 "state": "online", 00:21:52.302 "raid_level": "raid5f", 00:21:52.302 "superblock": true, 00:21:52.302 "num_base_bdevs": 3, 00:21:52.302 "num_base_bdevs_discovered": 3, 00:21:52.302 "num_base_bdevs_operational": 3, 00:21:52.302 "process": { 
00:21:52.302 "type": "rebuild", 00:21:52.302 "target": "spare", 00:21:52.302 "progress": { 00:21:52.302 "blocks": 83968, 00:21:52.302 "percent": 66 00:21:52.302 } 00:21:52.302 }, 00:21:52.302 "base_bdevs_list": [ 00:21:52.302 { 00:21:52.302 "name": "spare", 00:21:52.302 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:52.302 "is_configured": true, 00:21:52.302 "data_offset": 2048, 00:21:52.302 "data_size": 63488 00:21:52.302 }, 00:21:52.302 { 00:21:52.302 "name": "BaseBdev2", 00:21:52.302 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:52.302 "is_configured": true, 00:21:52.302 "data_offset": 2048, 00:21:52.302 "data_size": 63488 00:21:52.302 }, 00:21:52.302 { 00:21:52.302 "name": "BaseBdev3", 00:21:52.302 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:52.302 "is_configured": true, 00:21:52.302 "data_offset": 2048, 00:21:52.302 "data_size": 63488 00:21:52.302 } 00:21:52.302 ] 00:21:52.302 }' 00:21:52.302 14:23:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.302 14:23:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.302 14:23:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.302 14:23:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.302 14:23:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.239 14:23:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.497 14:23:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.497 "name": "raid_bdev1", 00:21:53.497 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:53.497 "strip_size_kb": 64, 00:21:53.497 "state": "online", 00:21:53.497 "raid_level": "raid5f", 00:21:53.497 "superblock": true, 00:21:53.497 "num_base_bdevs": 3, 00:21:53.497 "num_base_bdevs_discovered": 3, 00:21:53.497 "num_base_bdevs_operational": 3, 00:21:53.497 "process": { 00:21:53.497 "type": "rebuild", 00:21:53.497 "target": "spare", 00:21:53.497 "progress": { 00:21:53.497 "blocks": 110592, 00:21:53.497 "percent": 87 00:21:53.497 } 00:21:53.497 }, 00:21:53.497 "base_bdevs_list": [ 00:21:53.497 { 00:21:53.497 "name": "spare", 00:21:53.497 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:53.497 "is_configured": true, 00:21:53.497 "data_offset": 2048, 00:21:53.497 "data_size": 63488 00:21:53.497 }, 00:21:53.497 { 00:21:53.497 "name": "BaseBdev2", 00:21:53.497 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:53.497 "is_configured": true, 00:21:53.497 "data_offset": 2048, 00:21:53.497 "data_size": 63488 00:21:53.497 }, 00:21:53.497 { 00:21:53.497 "name": "BaseBdev3", 00:21:53.497 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:53.497 "is_configured": true, 00:21:53.497 "data_offset": 2048, 00:21:53.497 "data_size": 63488 00:21:53.497 } 00:21:53.497 ] 00:21:53.497 }' 00:21:53.497 14:23:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.497 14:23:45 -- bdev/bdev_raid.sh@190 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.497 14:23:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.756 14:23:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.756 14:23:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:54.352 [2024-11-18 14:23:46.208431] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:54.352 [2024-11-18 14:23:46.208629] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:54.352 [2024-11-18 14:23:46.208880] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.643 14:23:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.906 "name": "raid_bdev1", 00:21:54.906 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:54.906 "strip_size_kb": 64, 00:21:54.906 "state": "online", 00:21:54.906 "raid_level": "raid5f", 00:21:54.906 "superblock": true, 00:21:54.906 "num_base_bdevs": 3, 00:21:54.906 "num_base_bdevs_discovered": 3, 00:21:54.906 "num_base_bdevs_operational": 3, 00:21:54.906 "base_bdevs_list": [ 00:21:54.906 { 00:21:54.906 "name": "spare", 00:21:54.906 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:54.906 "is_configured": true, 00:21:54.906 "data_offset": 2048, 00:21:54.906 "data_size": 63488 00:21:54.906 }, 00:21:54.906 { 00:21:54.906 "name": "BaseBdev2", 00:21:54.906 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:54.906 "is_configured": true, 00:21:54.906 "data_offset": 2048, 00:21:54.906 "data_size": 63488 00:21:54.906 }, 00:21:54.906 { 00:21:54.906 "name": "BaseBdev3", 00:21:54.906 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:54.906 "is_configured": true, 00:21:54.906 "data_offset": 2048, 00:21:54.906 "data_size": 63488 00:21:54.906 } 00:21:54.906 ] 00:21:54.906 }' 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@660 -- # break 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.906 14:23:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:55.163 14:23:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:55.163 "name": "raid_bdev1", 00:21:55.163 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:55.163 "strip_size_kb": 64, 00:21:55.163 "state": "online", 00:21:55.163 "raid_level": "raid5f", 00:21:55.163 "superblock": true, 00:21:55.163 "num_base_bdevs": 3, 00:21:55.163 "num_base_bdevs_discovered": 3, 00:21:55.163 "num_base_bdevs_operational": 3, 00:21:55.163 "base_bdevs_list": [ 00:21:55.163 { 00:21:55.163 "name": "spare", 00:21:55.163 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:55.163 "is_configured": true, 00:21:55.163 "data_offset": 2048, 00:21:55.163 "data_size": 63488 00:21:55.163 }, 00:21:55.163 { 00:21:55.163 "name": "BaseBdev2", 00:21:55.163 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:55.163 "is_configured": true, 00:21:55.163 "data_offset": 2048, 00:21:55.163 "data_size": 63488 00:21:55.163 }, 00:21:55.163 { 00:21:55.163 "name": "BaseBdev3", 00:21:55.163 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:55.163 "is_configured": true, 00:21:55.163 "data_offset": 2048, 00:21:55.163 "data_size": 63488 00:21:55.163 } 00:21:55.163 ] 00:21:55.163 }' 00:21:55.163 14:23:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.422 "name": "raid_bdev1", 00:21:55.422 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:55.422 "strip_size_kb": 64, 00:21:55.422 "state": "online", 00:21:55.422 "raid_level": "raid5f", 00:21:55.422 "superblock": true, 00:21:55.422 "num_base_bdevs": 3, 00:21:55.422 "num_base_bdevs_discovered": 3, 00:21:55.422 "num_base_bdevs_operational": 3, 00:21:55.422 "base_bdevs_list": [ 00:21:55.422 { 00:21:55.422 "name": "spare", 00:21:55.422 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:55.422 "is_configured": true, 00:21:55.422 "data_offset": 2048, 00:21:55.422 "data_size": 63488 00:21:55.422 }, 00:21:55.422 { 00:21:55.422 "name": "BaseBdev2", 00:21:55.422 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:55.422 "is_configured": true, 00:21:55.422 "data_offset": 2048, 00:21:55.422 "data_size": 63488 00:21:55.422 }, 00:21:55.422 { 00:21:55.422 "name": "BaseBdev3", 00:21:55.422 "uuid": 
"cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:55.422 "is_configured": true, 00:21:55.422 "data_offset": 2048, 00:21:55.422 "data_size": 63488 00:21:55.422 } 00:21:55.422 ] 00:21:55.422 }' 00:21:55.422 14:23:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.422 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:21:56.358 14:23:48 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:56.358 [2024-11-18 14:23:48.316427] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.358 [2024-11-18 14:23:48.316570] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.358 [2024-11-18 14:23:48.316756] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.358 [2024-11-18 14:23:48.316964] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.358 [2024-11-18 14:23:48.317073] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:21:56.358 14:23:48 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:56.358 14:23:48 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.618 14:23:48 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:56.618 14:23:48 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:56.618 14:23:48 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@12 -- # local i 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.618 14:23:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:56.877 /dev/nbd0 00:21:56.877 14:23:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.877 14:23:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.877 14:23:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:56.877 14:23:48 -- common/autotest_common.sh@867 -- # local i 00:21:56.877 14:23:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:56.877 14:23:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:56.877 14:23:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:56.877 14:23:48 -- common/autotest_common.sh@871 -- # break 00:21:56.877 14:23:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:56.877 14:23:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:56.877 14:23:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.877 1+0 records in 00:21:56.877 1+0 records out 00:21:56.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270689 s, 15.1 MB/s 00:21:56.877 14:23:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.877 
14:23:48 -- common/autotest_common.sh@884 -- # size=4096 00:21:56.877 14:23:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.877 14:23:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:56.877 14:23:48 -- common/autotest_common.sh@887 -- # return 0 00:21:56.877 14:23:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.877 14:23:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.877 14:23:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:57.141 /dev/nbd1 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:57.141 14:23:49 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:57.141 14:23:49 -- common/autotest_common.sh@867 -- # local i 00:21:57.141 14:23:49 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:57.141 14:23:49 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:57.141 14:23:49 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:57.141 14:23:49 -- common/autotest_common.sh@871 -- # break 00:21:57.141 14:23:49 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:57.141 14:23:49 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:57.141 14:23:49 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.141 1+0 records in 00:21:57.141 1+0 records out 00:21:57.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257866 s, 15.9 MB/s 00:21:57.141 14:23:49 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.141 14:23:49 -- common/autotest_common.sh@884 -- # size=4096 00:21:57.141 14:23:49 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.141 14:23:49 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:57.141 14:23:49 -- common/autotest_common.sh@887 -- # return 0 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.141 14:23:49 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:57.141 14:23:49 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@51 -- # local i 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.141 14:23:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@41 -- # break 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@53 -- 
# for i in "${nbd_list[@]}" 00:21:57.399 14:23:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@41 -- # break 00:21:57.658 14:23:49 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.658 14:23:49 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:57.658 14:23:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:57.658 14:23:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:57.658 14:23:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:57.917 14:23:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:58.178 [2024-11-18 14:23:50.043298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:58.178 [2024-11-18 14:23:50.043387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.178 [2024-11-18 14:23:50.043441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:58.178 [2024-11-18 14:23:50.043470] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.178 [2024-11-18 14:23:50.045531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.178 [2024-11-18 14:23:50.045596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:58.178 [2024-11-18 14:23:50.045677] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:58.178 [2024-11-18 14:23:50.045744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:58.178 BaseBdev1 00:21:58.178 14:23:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:58.178 14:23:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:58.178 14:23:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:58.438 14:23:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:58.438 [2024-11-18 14:23:50.483375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:58.438 [2024-11-18 14:23:50.483428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.438 [2024-11-18 14:23:50.483462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:58.438 [2024-11-18 14:23:50.483504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.438 [2024-11-18 14:23:50.483831] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.438 [2024-11-18 14:23:50.483882] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:58.438 
[2024-11-18 14:23:50.483947] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:58.438 [2024-11-18 14:23:50.483961] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:58.438 [2024-11-18 14:23:50.483968] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:58.438 [2024-11-18 14:23:50.483994] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:21:58.438 [2024-11-18 14:23:50.484031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:58.438 BaseBdev2 00:21:58.438 14:23:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:58.438 14:23:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:58.438 14:23:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:58.697 14:23:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:58.955 [2024-11-18 14:23:50.863441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:58.955 [2024-11-18 14:23:50.863494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.955 [2024-11-18 14:23:50.863530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:58.955 [2024-11-18 14:23:50.863561] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.955 [2024-11-18 14:23:50.863886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.955 [2024-11-18 14:23:50.863940] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:58.955 [2024-11-18 14:23:50.863999] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:58.955 [2024-11-18 14:23:50.864026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:58.955 BaseBdev3 00:21:58.955 14:23:50 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:59.213 14:23:51 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:59.213 [2024-11-18 14:23:51.231494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:59.213 [2024-11-18 14:23:51.231548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.214 [2024-11-18 14:23:51.231580] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:59.214 [2024-11-18 14:23:51.231609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.214 [2024-11-18 14:23:51.231954] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.214 [2024-11-18 14:23:51.232009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:59.214 [2024-11-18 14:23:51.232089] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:59.214 [2024-11-18 14:23:51.232122] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
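The teardown/rebuild sequence traced above follows one pattern per base bdev: delete the passthru vbdev, then re-create it on top of its backing bdev so that examine re-reads the raid superblock and re-claims the base. A minimal sketch of that loop (socket and script path as traced; the individual RPCs are verbatim from the log, the loop form is an assumption):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc_py bdev_passthru_delete "$bdev"                          # drop the old passthru
        $rpc_py bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"   # re-register on the malloc base
    done
    # the spare sits on a delay bdev rather than a plain malloc one:
    $rpc_py bdev_passthru_delete spare
    $rpc_py bdev_passthru_create -b spare_delay -p spare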
00:21:59.214 spare 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.214 14:23:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.472 [2024-11-18 14:23:51.332209] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:21:59.472 [2024-11-18 14:23:51.332229] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:59.472 [2024-11-18 14:23:51.332337] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044230 00:21:59.472 [2024-11-18 14:23:51.332978] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:21:59.472 [2024-11-18 14:23:51.332993] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:21:59.472 [2024-11-18 14:23:51.333111] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.472 14:23:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:59.472 "name": "raid_bdev1", 00:21:59.472 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:21:59.472 "strip_size_kb": 64, 00:21:59.472 "state": "online", 00:21:59.472 "raid_level": "raid5f", 00:21:59.472 "superblock": true, 00:21:59.472 "num_base_bdevs": 3, 00:21:59.472 "num_base_bdevs_discovered": 3, 00:21:59.472 "num_base_bdevs_operational": 3, 00:21:59.472 "base_bdevs_list": [ 00:21:59.472 { 00:21:59.472 "name": "spare", 00:21:59.472 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:21:59.472 "is_configured": true, 00:21:59.472 "data_offset": 2048, 00:21:59.472 "data_size": 63488 00:21:59.472 }, 00:21:59.472 { 00:21:59.472 "name": "BaseBdev2", 00:21:59.473 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:21:59.473 "is_configured": true, 00:21:59.473 "data_offset": 2048, 00:21:59.473 "data_size": 63488 00:21:59.473 }, 00:21:59.473 { 00:21:59.473 "name": "BaseBdev3", 00:21:59.473 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:21:59.473 "is_configured": true, 00:21:59.473 "data_offset": 2048, 00:21:59.473 "data_size": 63488 00:21:59.473 } 00:21:59.473 ] 00:21:59.473 }' 00:21:59.473 14:23:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:59.473 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:22:00.040 14:23:52 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:00.040 14:23:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:00.040 14:23:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:00.040 14:23:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:00.040 14:23:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:00.040 
14:23:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.040 14:23:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.298 14:23:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:00.298 "name": "raid_bdev1", 00:22:00.298 "uuid": "50a51c5e-5679-4ce8-bb20-4ae511c5157e", 00:22:00.298 "strip_size_kb": 64, 00:22:00.298 "state": "online", 00:22:00.298 "raid_level": "raid5f", 00:22:00.298 "superblock": true, 00:22:00.298 "num_base_bdevs": 3, 00:22:00.298 "num_base_bdevs_discovered": 3, 00:22:00.298 "num_base_bdevs_operational": 3, 00:22:00.298 "base_bdevs_list": [ 00:22:00.298 { 00:22:00.298 "name": "spare", 00:22:00.298 "uuid": "b383b69e-a3c4-5fb4-9cab-26b3ba97f2c3", 00:22:00.298 "is_configured": true, 00:22:00.298 "data_offset": 2048, 00:22:00.298 "data_size": 63488 00:22:00.298 }, 00:22:00.298 { 00:22:00.298 "name": "BaseBdev2", 00:22:00.298 "uuid": "7a41889c-d7a6-5d95-8fa5-c3195b627435", 00:22:00.299 "is_configured": true, 00:22:00.299 "data_offset": 2048, 00:22:00.299 "data_size": 63488 00:22:00.299 }, 00:22:00.299 { 00:22:00.299 "name": "BaseBdev3", 00:22:00.299 "uuid": "cda5ce74-3c23-5ea7-b62c-42b9ba417fde", 00:22:00.299 "is_configured": true, 00:22:00.299 "data_offset": 2048, 00:22:00.299 "data_size": 63488 00:22:00.299 } 00:22:00.299 ] 00:22:00.299 }' 00:22:00.299 14:23:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:00.299 14:23:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:00.299 14:23:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:00.557 14:23:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:00.557 14:23:52 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:00.557 14:23:52 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.816 14:23:52 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.816 14:23:52 -- bdev/bdev_raid.sh@709 -- # killprocess 138665 00:22:00.816 14:23:52 -- common/autotest_common.sh@936 -- # '[' -z 138665 ']' 00:22:00.816 14:23:52 -- common/autotest_common.sh@940 -- # kill -0 138665 00:22:00.816 14:23:52 -- common/autotest_common.sh@941 -- # uname 00:22:00.816 14:23:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.816 14:23:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138665 00:22:00.816 killing process with pid 138665 00:22:00.816 Received shutdown signal, test time was about 60.000000 seconds 00:22:00.816 00:22:00.816 Latency(us) 00:22:00.816 [2024-11-18T14:23:52.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.816 [2024-11-18T14:23:52.890Z] =================================================================================================================== 00:22:00.816 [2024-11-18T14:23:52.890Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:00.816 14:23:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:00.816 14:23:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:00.816 14:23:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138665' 00:22:00.816 14:23:52 -- common/autotest_common.sh@955 -- # kill 138665 00:22:00.816 [2024-11-18 14:23:52.652016] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.816 14:23:52 -- common/autotest_common.sh@960 -- # wait 138665 
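Two things are worth noting in the shutdown block above. First, the min column of the latency table prints 18446744073709551616.00, which is 2^64 — presumably a per-device minimum that was never updated from its UINT64_MAX initializer because no I/O completed in the timed window. Second, the killprocess helper being traced reduces to roughly the following (a hedged reconstruction from the xtrace, not the verbatim autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # trace: '[' -z 138665 ']'
        kill -0 "$pid" || return 1                 # bail out if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" != sudo ] || return 1    # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap the target and propagate its exit code
    }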
00:22:00.816 [2024-11-18 14:23:52.652080] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:00.816 [2024-11-18 14:23:52.652148] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:00.816 [2024-11-18 14:23:52.652158] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:22:00.816 [2024-11-18 14:23:52.698198] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.075 ************************************ 00:22:01.075 END TEST raid5f_rebuild_test_sb 00:22:01.075 ************************************ 00:22:01.075 14:23:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:01.075 00:22:01.075 real 0m22.690s 00:22:01.075 user 0m35.878s 00:22:01.075 sys 0m2.781s 00:22:01.075 14:23:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:01.075 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:01.075 14:23:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:01.075 14:23:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.075 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:22:01.075 ************************************ 00:22:01.075 START TEST raid5f_state_function_test 00:22:01.075 ************************************ 00:22:01.075 14:23:53 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:01.075 
14:23:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=139279 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139279' 00:22:01.075 Process raid pid: 139279 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139279 /var/tmp/spdk-raid.sock 00:22:01.075 14:23:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:01.075 14:23:53 -- common/autotest_common.sh@829 -- # '[' -z 139279 ']' 00:22:01.075 14:23:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:01.075 14:23:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:01.075 14:23:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:01.075 14:23:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.075 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:22:01.075 [2024-11-18 14:23:53.108424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:01.075 [2024-11-18 14:23:53.108612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.334 [2024-11-18 14:23:53.254378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.334 [2024-11-18 14:23:53.332915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.592 [2024-11-18 14:23:53.410926] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.159 14:23:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.160 14:23:54 -- common/autotest_common.sh@862 -- # return 0 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:02.160 [2024-11-18 14:23:54.214784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:02.160 [2024-11-18 14:23:54.214860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:02.160 [2024-11-18 14:23:54.214873] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:02.160 [2024-11-18 14:23:54.214891] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:02.160 [2024-11-18 14:23:54.214898] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:02.160 [2024-11-18 14:23:54.214936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:02.160 [2024-11-18 14:23:54.214945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:02.160 [2024-11-18 14:23:54.214969] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 
4 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.160 14:23:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.418 14:23:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.418 14:23:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.418 "name": "Existed_Raid", 00:22:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.418 "strip_size_kb": 64, 00:22:02.418 "state": "configuring", 00:22:02.418 "raid_level": "raid5f", 00:22:02.418 "superblock": false, 00:22:02.418 "num_base_bdevs": 4, 00:22:02.418 "num_base_bdevs_discovered": 0, 00:22:02.418 "num_base_bdevs_operational": 4, 00:22:02.418 "base_bdevs_list": [ 00:22:02.418 { 00:22:02.418 "name": "BaseBdev1", 00:22:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.418 "is_configured": false, 00:22:02.418 "data_offset": 0, 00:22:02.418 "data_size": 0 00:22:02.418 }, 00:22:02.418 { 00:22:02.418 "name": "BaseBdev2", 00:22:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.418 "is_configured": false, 00:22:02.418 "data_offset": 0, 00:22:02.418 "data_size": 0 00:22:02.418 }, 00:22:02.418 { 00:22:02.418 "name": "BaseBdev3", 00:22:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.418 "is_configured": false, 00:22:02.418 "data_offset": 0, 00:22:02.418 "data_size": 0 00:22:02.418 }, 00:22:02.418 { 00:22:02.418 "name": "BaseBdev4", 00:22:02.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.418 "is_configured": false, 00:22:02.418 "data_offset": 0, 00:22:02.418 "data_size": 0 00:22:02.418 } 00:22:02.418 ] 00:22:02.418 }' 00:22:02.418 14:23:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.418 14:23:54 -- common/autotest_common.sh@10 -- # set +x 00:22:03.353 14:23:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:03.353 [2024-11-18 14:23:55.230789] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:03.353 [2024-11-18 14:23:55.230826] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:03.353 14:23:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:03.612 [2024-11-18 14:23:55.458847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:03.612 [2024-11-18 14:23:55.458887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:03.612 [2024-11-18 14:23:55.458896] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:03.612 [2024-11-18 14:23:55.458919] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:03.612 [2024-11-18 14:23:55.458927] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:03.612 [2024-11-18 14:23:55.458942] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:03.612 [2024-11-18 14:23:55.458948] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:03.612 [2024-11-18 14:23:55.458970] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:03.612 14:23:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:03.612 [2024-11-18 14:23:55.656459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.612 BaseBdev1 00:22:03.612 14:23:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:03.612 14:23:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:03.612 14:23:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:03.612 14:23:55 -- common/autotest_common.sh@899 -- # local i 00:22:03.612 14:23:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:03.612 14:23:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:03.612 14:23:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:03.870 14:23:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:04.129 [ 00:22:04.129 { 00:22:04.129 "name": "BaseBdev1", 00:22:04.129 "aliases": [ 00:22:04.129 "7a20e808-a0f7-48de-9d83-5d075664af18" 00:22:04.129 ], 00:22:04.129 "product_name": "Malloc disk", 00:22:04.129 "block_size": 512, 00:22:04.129 "num_blocks": 65536, 00:22:04.129 "uuid": "7a20e808-a0f7-48de-9d83-5d075664af18", 00:22:04.129 "assigned_rate_limits": { 00:22:04.129 "rw_ios_per_sec": 0, 00:22:04.129 "rw_mbytes_per_sec": 0, 00:22:04.129 "r_mbytes_per_sec": 0, 00:22:04.129 "w_mbytes_per_sec": 0 00:22:04.129 }, 00:22:04.129 "claimed": true, 00:22:04.129 "claim_type": "exclusive_write", 00:22:04.129 "zoned": false, 00:22:04.129 "supported_io_types": { 00:22:04.129 "read": true, 00:22:04.129 "write": true, 00:22:04.129 "unmap": true, 00:22:04.129 "write_zeroes": true, 00:22:04.129 "flush": true, 00:22:04.129 "reset": true, 00:22:04.129 "compare": false, 00:22:04.129 "compare_and_write": false, 00:22:04.129 "abort": true, 00:22:04.129 "nvme_admin": false, 00:22:04.129 "nvme_io": false 00:22:04.129 }, 00:22:04.129 "memory_domains": [ 00:22:04.129 { 00:22:04.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.129 "dma_device_type": 2 00:22:04.129 } 00:22:04.129 ], 00:22:04.129 "driver_specific": {} 00:22:04.129 } 00:22:04.129 ] 00:22:04.129 14:23:56 -- common/autotest_common.sh@905 -- # return 0 00:22:04.129 14:23:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
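The locals being assigned in the trace here (raid_bdev_name, expected_state, raid_level, strip_size, num_base_bdevs_operational, continuing just below) belong to verify_raid_bdev_state(). Reconstructed from the xtrace, the helper is roughly the following; the RPC call and jq select filter are verbatim from the log, the per-field assertions are assumed:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        [[ $(jq -r .state         <<<"$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r .raid_level    <<<"$raid_bdev_info") == "$raid_level" ]]
        [[ $(jq -r .strip_size_kb <<<"$raid_bdev_info") == "$strip_size" ]]
        [[ $(jq -r .num_base_bdevs_operational <<<"$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }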
00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.130 14:23:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.388 14:23:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.388 "name": "Existed_Raid", 00:22:04.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.388 "strip_size_kb": 64, 00:22:04.388 "state": "configuring", 00:22:04.388 "raid_level": "raid5f", 00:22:04.388 "superblock": false, 00:22:04.388 "num_base_bdevs": 4, 00:22:04.388 "num_base_bdevs_discovered": 1, 00:22:04.388 "num_base_bdevs_operational": 4, 00:22:04.388 "base_bdevs_list": [ 00:22:04.388 { 00:22:04.388 "name": "BaseBdev1", 00:22:04.388 "uuid": "7a20e808-a0f7-48de-9d83-5d075664af18", 00:22:04.388 "is_configured": true, 00:22:04.388 "data_offset": 0, 00:22:04.388 "data_size": 65536 00:22:04.388 }, 00:22:04.388 { 00:22:04.388 "name": "BaseBdev2", 00:22:04.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.388 "is_configured": false, 00:22:04.388 "data_offset": 0, 00:22:04.388 "data_size": 0 00:22:04.388 }, 00:22:04.388 { 00:22:04.388 "name": "BaseBdev3", 00:22:04.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.388 "is_configured": false, 00:22:04.388 "data_offset": 0, 00:22:04.388 "data_size": 0 00:22:04.388 }, 00:22:04.388 { 00:22:04.388 "name": "BaseBdev4", 00:22:04.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.388 "is_configured": false, 00:22:04.388 "data_offset": 0, 00:22:04.388 "data_size": 0 00:22:04.388 } 00:22:04.388 ] 00:22:04.388 }' 00:22:04.388 14:23:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.388 14:23:56 -- common/autotest_common.sh@10 -- # set +x 00:22:04.955 14:23:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:05.214 [2024-11-18 14:23:57.120700] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:05.214 [2024-11-18 14:23:57.120749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:05.214 14:23:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:05.214 14:23:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:05.473 [2024-11-18 14:23:57.376825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:05.473 [2024-11-18 14:23:57.378769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.473 [2024-11-18 14:23:57.378838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.473 [2024-11-18 14:23:57.378848] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:05.473 [2024-11-18 14:23:57.378872] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.473 [2024-11-18 14:23:57.378879] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:22:05.473 [2024-11-18 14:23:57.378896] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.473 14:23:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.732 14:23:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.732 "name": "Existed_Raid", 00:22:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.732 "strip_size_kb": 64, 00:22:05.732 "state": "configuring", 00:22:05.732 "raid_level": "raid5f", 00:22:05.732 "superblock": false, 00:22:05.732 "num_base_bdevs": 4, 00:22:05.732 "num_base_bdevs_discovered": 1, 00:22:05.732 "num_base_bdevs_operational": 4, 00:22:05.732 "base_bdevs_list": [ 00:22:05.732 { 00:22:05.732 "name": "BaseBdev1", 00:22:05.732 "uuid": "7a20e808-a0f7-48de-9d83-5d075664af18", 00:22:05.732 "is_configured": true, 00:22:05.732 "data_offset": 0, 00:22:05.732 "data_size": 65536 00:22:05.732 }, 00:22:05.732 { 00:22:05.732 "name": "BaseBdev2", 00:22:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.732 "is_configured": false, 00:22:05.732 "data_offset": 0, 00:22:05.732 "data_size": 0 00:22:05.732 }, 00:22:05.732 { 00:22:05.732 "name": "BaseBdev3", 00:22:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.732 "is_configured": false, 00:22:05.732 "data_offset": 0, 00:22:05.732 "data_size": 0 00:22:05.732 }, 00:22:05.732 { 00:22:05.732 "name": "BaseBdev4", 00:22:05.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.732 "is_configured": false, 00:22:05.732 "data_offset": 0, 00:22:05.732 "data_size": 0 00:22:05.732 } 00:22:05.732 ] 00:22:05.732 }' 00:22:05.732 14:23:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.732 14:23:57 -- common/autotest_common.sh@10 -- # set +x 00:22:06.300 14:23:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:06.300 [2024-11-18 14:23:58.336269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:06.300 BaseBdev2 00:22:06.300 14:23:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:06.300 14:23:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:06.300 14:23:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:06.300 14:23:58 -- common/autotest_common.sh@899 -- # local i 00:22:06.300 14:23:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
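waitforbdev, traced here for BaseBdev2 (and for BaseBdev1 just before), is the usual poll-until-registered helper: it lets outstanding examine callbacks finish and then asks the target for the bdev with a timeout. A sketch reconstructed from the xtrace — the 2000 ms default and both RPCs appear verbatim in the trace, the wrapper shape is assumed:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i
        [[ -z $bdev_timeout ]] && bdev_timeout=2000      # ms, as seen in the trace
        $rpc_py bdev_wait_for_examine                    # drain examine callbacks first
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }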
00:22:06.300 14:23:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:06.300 14:23:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:06.559 14:23:58 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:06.818 [ 00:22:06.818 { 00:22:06.818 "name": "BaseBdev2", 00:22:06.818 "aliases": [ 00:22:06.818 "f2a85cbf-1526-46d1-a7f9-227cd1031835" 00:22:06.818 ], 00:22:06.818 "product_name": "Malloc disk", 00:22:06.818 "block_size": 512, 00:22:06.818 "num_blocks": 65536, 00:22:06.818 "uuid": "f2a85cbf-1526-46d1-a7f9-227cd1031835", 00:22:06.818 "assigned_rate_limits": { 00:22:06.818 "rw_ios_per_sec": 0, 00:22:06.818 "rw_mbytes_per_sec": 0, 00:22:06.818 "r_mbytes_per_sec": 0, 00:22:06.818 "w_mbytes_per_sec": 0 00:22:06.818 }, 00:22:06.818 "claimed": true, 00:22:06.818 "claim_type": "exclusive_write", 00:22:06.818 "zoned": false, 00:22:06.818 "supported_io_types": { 00:22:06.818 "read": true, 00:22:06.818 "write": true, 00:22:06.818 "unmap": true, 00:22:06.818 "write_zeroes": true, 00:22:06.818 "flush": true, 00:22:06.818 "reset": true, 00:22:06.818 "compare": false, 00:22:06.818 "compare_and_write": false, 00:22:06.818 "abort": true, 00:22:06.818 "nvme_admin": false, 00:22:06.818 "nvme_io": false 00:22:06.818 }, 00:22:06.818 "memory_domains": [ 00:22:06.818 { 00:22:06.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.818 "dma_device_type": 2 00:22:06.818 } 00:22:06.818 ], 00:22:06.818 "driver_specific": {} 00:22:06.818 } 00:22:06.818 ] 00:22:06.818 14:23:58 -- common/autotest_common.sh@905 -- # return 0 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.818 14:23:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.077 14:23:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.077 "name": "Existed_Raid", 00:22:07.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.077 "strip_size_kb": 64, 00:22:07.077 "state": "configuring", 00:22:07.077 "raid_level": "raid5f", 00:22:07.077 "superblock": false, 00:22:07.077 "num_base_bdevs": 4, 00:22:07.077 "num_base_bdevs_discovered": 2, 00:22:07.077 "num_base_bdevs_operational": 4, 00:22:07.077 "base_bdevs_list": [ 00:22:07.077 { 00:22:07.077 "name": "BaseBdev1", 00:22:07.077 "uuid": "7a20e808-a0f7-48de-9d83-5d075664af18", 00:22:07.077 "is_configured": true, 00:22:07.077 
"data_offset": 0, 00:22:07.077 "data_size": 65536 00:22:07.077 }, 00:22:07.077 { 00:22:07.077 "name": "BaseBdev2", 00:22:07.077 "uuid": "f2a85cbf-1526-46d1-a7f9-227cd1031835", 00:22:07.077 "is_configured": true, 00:22:07.077 "data_offset": 0, 00:22:07.077 "data_size": 65536 00:22:07.077 }, 00:22:07.077 { 00:22:07.077 "name": "BaseBdev3", 00:22:07.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.077 "is_configured": false, 00:22:07.077 "data_offset": 0, 00:22:07.077 "data_size": 0 00:22:07.077 }, 00:22:07.077 { 00:22:07.077 "name": "BaseBdev4", 00:22:07.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.077 "is_configured": false, 00:22:07.077 "data_offset": 0, 00:22:07.077 "data_size": 0 00:22:07.077 } 00:22:07.077 ] 00:22:07.077 }' 00:22:07.077 14:23:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.077 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:22:07.644 14:23:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:07.903 [2024-11-18 14:23:59.831874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:07.903 BaseBdev3 00:22:07.903 14:23:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:07.903 14:23:59 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:07.903 14:23:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:07.903 14:23:59 -- common/autotest_common.sh@899 -- # local i 00:22:07.903 14:23:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:07.903 14:23:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:07.903 14:23:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.162 14:24:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:08.421 [ 00:22:08.421 { 00:22:08.421 "name": "BaseBdev3", 00:22:08.421 "aliases": [ 00:22:08.421 "cd50defc-b983-4d09-8192-78e50d30f353" 00:22:08.421 ], 00:22:08.421 "product_name": "Malloc disk", 00:22:08.421 "block_size": 512, 00:22:08.421 "num_blocks": 65536, 00:22:08.421 "uuid": "cd50defc-b983-4d09-8192-78e50d30f353", 00:22:08.421 "assigned_rate_limits": { 00:22:08.421 "rw_ios_per_sec": 0, 00:22:08.421 "rw_mbytes_per_sec": 0, 00:22:08.421 "r_mbytes_per_sec": 0, 00:22:08.421 "w_mbytes_per_sec": 0 00:22:08.421 }, 00:22:08.421 "claimed": true, 00:22:08.421 "claim_type": "exclusive_write", 00:22:08.421 "zoned": false, 00:22:08.421 "supported_io_types": { 00:22:08.421 "read": true, 00:22:08.421 "write": true, 00:22:08.421 "unmap": true, 00:22:08.421 "write_zeroes": true, 00:22:08.421 "flush": true, 00:22:08.421 "reset": true, 00:22:08.421 "compare": false, 00:22:08.421 "compare_and_write": false, 00:22:08.421 "abort": true, 00:22:08.421 "nvme_admin": false, 00:22:08.421 "nvme_io": false 00:22:08.421 }, 00:22:08.421 "memory_domains": [ 00:22:08.421 { 00:22:08.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.421 "dma_device_type": 2 00:22:08.421 } 00:22:08.421 ], 00:22:08.421 "driver_specific": {} 00:22:08.421 } 00:22:08.421 ] 00:22:08.421 14:24:00 -- common/autotest_common.sh@905 -- # return 0 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.421 14:24:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.679 14:24:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.680 "name": "Existed_Raid", 00:22:08.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.680 "strip_size_kb": 64, 00:22:08.680 "state": "configuring", 00:22:08.680 "raid_level": "raid5f", 00:22:08.680 "superblock": false, 00:22:08.680 "num_base_bdevs": 4, 00:22:08.680 "num_base_bdevs_discovered": 3, 00:22:08.680 "num_base_bdevs_operational": 4, 00:22:08.680 "base_bdevs_list": [ 00:22:08.680 { 00:22:08.680 "name": "BaseBdev1", 00:22:08.680 "uuid": "7a20e808-a0f7-48de-9d83-5d075664af18", 00:22:08.680 "is_configured": true, 00:22:08.680 "data_offset": 0, 00:22:08.680 "data_size": 65536 00:22:08.680 }, 00:22:08.680 { 00:22:08.680 "name": "BaseBdev2", 00:22:08.680 "uuid": "f2a85cbf-1526-46d1-a7f9-227cd1031835", 00:22:08.680 "is_configured": true, 00:22:08.680 "data_offset": 0, 00:22:08.680 "data_size": 65536 00:22:08.680 }, 00:22:08.680 { 00:22:08.680 "name": "BaseBdev3", 00:22:08.680 "uuid": "cd50defc-b983-4d09-8192-78e50d30f353", 00:22:08.680 "is_configured": true, 00:22:08.680 "data_offset": 0, 00:22:08.680 "data_size": 65536 00:22:08.680 }, 00:22:08.680 { 00:22:08.680 "name": "BaseBdev4", 00:22:08.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.680 "is_configured": false, 00:22:08.680 "data_offset": 0, 00:22:08.680 "data_size": 0 00:22:08.680 } 00:22:08.680 ] 00:22:08.680 }' 00:22:08.680 14:24:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.680 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:22:09.246 14:24:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:09.504 [2024-11-18 14:24:01.407611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.504 [2024-11-18 14:24:01.407680] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:09.504 [2024-11-18 14:24:01.407692] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:09.504 [2024-11-18 14:24:01.407831] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:22:09.504 [2024-11-18 14:24:01.408694] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:09.504 [2024-11-18 14:24:01.408717] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:09.504 [2024-11-18 14:24:01.408920] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
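The blockcnt in the debug line above checks out against the geometry: Existed_Raid is raid5f over 4 malloc bdevs of 65536 blocks each, and raid5f spends one base bdev's worth of every stripe on parity, so usable capacity is (4 - 1) x 65536 = 196608 blocks, exactly the value logged. The earlier 3-bdev raid_bdev1 works out the same way once the superblock is carved off: 65536 - 2048 (data_offset) = 63488 data blocks per base, and (3 - 1) x 63488 = 126976, matching its "blockcnt 126976" line.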
00:22:09.504 BaseBdev4 00:22:09.504 14:24:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:09.504 14:24:01 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:09.504 14:24:01 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:09.504 14:24:01 -- common/autotest_common.sh@899 -- # local i 00:22:09.504 14:24:01 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:09.504 14:24:01 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:09.505 14:24:01 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:09.764 14:24:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.764 [ 00:22:09.764 { 00:22:09.764 "name": "BaseBdev4", 00:22:09.764 "aliases": [ 00:22:09.764 "b1833255-e3e7-4bbc-a4dc-9ef5fa1a2e1f" 00:22:09.764 ], 00:22:09.764 "product_name": "Malloc disk", 00:22:09.764 "block_size": 512, 00:22:09.764 "num_blocks": 65536, 00:22:09.764 "uuid": "b1833255-e3e7-4bbc-a4dc-9ef5fa1a2e1f", 00:22:09.764 "assigned_rate_limits": { 00:22:09.764 "rw_ios_per_sec": 0, 00:22:09.764 "rw_mbytes_per_sec": 0, 00:22:09.764 "r_mbytes_per_sec": 0, 00:22:09.764 "w_mbytes_per_sec": 0 00:22:09.764 }, 00:22:09.764 "claimed": true, 00:22:09.764 "claim_type": "exclusive_write", 00:22:09.764 "zoned": false, 00:22:09.764 "supported_io_types": { 00:22:09.764 "read": true, 00:22:09.764 "write": true, 00:22:09.764 "unmap": true, 00:22:09.764 "write_zeroes": true, 00:22:09.764 "flush": true, 00:22:09.764 "reset": true, 00:22:09.764 "compare": false, 00:22:09.764 "compare_and_write": false, 00:22:09.764 "abort": true, 00:22:09.764 "nvme_admin": false, 00:22:09.764 "nvme_io": false 00:22:09.764 }, 00:22:09.764 "memory_domains": [ 00:22:09.764 { 00:22:09.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.764 "dma_device_type": 2 00:22:09.764 } 00:22:09.764 ], 00:22:09.764 "driver_specific": {} 00:22:09.764 } 00:22:09.764 ] 00:22:09.764 14:24:01 -- common/autotest_common.sh@905 -- # return 0 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.764 14:24:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.023 14:24:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.023 "name": "Existed_Raid", 00:22:10.023 "uuid": "7e043e1a-6141-4a43-b37e-f39e1261797d", 00:22:10.023 "strip_size_kb": 64, 00:22:10.023 "state": "online", 
00:22:10.023 "raid_level": "raid5f", 00:22:10.023 "superblock": false, 00:22:10.023 "num_base_bdevs": 4, 00:22:10.023 "num_base_bdevs_discovered": 4, 00:22:10.023 "num_base_bdevs_operational": 4, 00:22:10.023 "base_bdevs_list": [ 00:22:10.023 { 00:22:10.023 "name": "BaseBdev1", 00:22:10.023 "uuid": "7a20e808-a0f7-48de-9d83-5d075664af18", 00:22:10.023 "is_configured": true, 00:22:10.023 "data_offset": 0, 00:22:10.023 "data_size": 65536 00:22:10.023 }, 00:22:10.023 { 00:22:10.023 "name": "BaseBdev2", 00:22:10.023 "uuid": "f2a85cbf-1526-46d1-a7f9-227cd1031835", 00:22:10.023 "is_configured": true, 00:22:10.023 "data_offset": 0, 00:22:10.023 "data_size": 65536 00:22:10.023 }, 00:22:10.023 { 00:22:10.023 "name": "BaseBdev3", 00:22:10.023 "uuid": "cd50defc-b983-4d09-8192-78e50d30f353", 00:22:10.023 "is_configured": true, 00:22:10.023 "data_offset": 0, 00:22:10.023 "data_size": 65536 00:22:10.023 }, 00:22:10.023 { 00:22:10.023 "name": "BaseBdev4", 00:22:10.023 "uuid": "b1833255-e3e7-4bbc-a4dc-9ef5fa1a2e1f", 00:22:10.023 "is_configured": true, 00:22:10.023 "data_offset": 0, 00:22:10.023 "data_size": 65536 00:22:10.023 } 00:22:10.023 ] 00:22:10.023 }' 00:22:10.023 14:24:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.023 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:22:10.591 14:24:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:10.850 [2024-11-18 14:24:02.807972] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.850 14:24:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.109 14:24:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.109 "name": "Existed_Raid", 00:22:11.109 "uuid": "7e043e1a-6141-4a43-b37e-f39e1261797d", 00:22:11.109 "strip_size_kb": 64, 00:22:11.109 "state": "online", 00:22:11.109 "raid_level": "raid5f", 00:22:11.109 "superblock": false, 00:22:11.109 "num_base_bdevs": 4, 00:22:11.109 "num_base_bdevs_discovered": 3, 00:22:11.109 "num_base_bdevs_operational": 3, 00:22:11.109 "base_bdevs_list": [ 00:22:11.109 { 00:22:11.109 "name": null, 00:22:11.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.109 "is_configured": false, 
00:22:11.109 "data_offset": 0, 00:22:11.109 "data_size": 65536 00:22:11.109 }, 00:22:11.109 { 00:22:11.109 "name": "BaseBdev2", 00:22:11.109 "uuid": "f2a85cbf-1526-46d1-a7f9-227cd1031835", 00:22:11.109 "is_configured": true, 00:22:11.109 "data_offset": 0, 00:22:11.109 "data_size": 65536 00:22:11.109 }, 00:22:11.109 { 00:22:11.109 "name": "BaseBdev3", 00:22:11.109 "uuid": "cd50defc-b983-4d09-8192-78e50d30f353", 00:22:11.109 "is_configured": true, 00:22:11.109 "data_offset": 0, 00:22:11.109 "data_size": 65536 00:22:11.109 }, 00:22:11.109 { 00:22:11.109 "name": "BaseBdev4", 00:22:11.109 "uuid": "b1833255-e3e7-4bbc-a4dc-9ef5fa1a2e1f", 00:22:11.109 "is_configured": true, 00:22:11.109 "data_offset": 0, 00:22:11.109 "data_size": 65536 00:22:11.109 } 00:22:11.109 ] 00:22:11.109 }' 00:22:11.109 14:24:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.109 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:22:11.677 14:24:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:11.677 14:24:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:11.677 14:24:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:11.677 14:24:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.936 14:24:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:11.936 14:24:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:11.936 14:24:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:12.195 [2024-11-18 14:24:04.161155] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:12.195 [2024-11-18 14:24:04.161186] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.195 [2024-11-18 14:24:04.161263] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.195 14:24:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:12.195 14:24:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:12.195 14:24:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:12.195 14:24:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.454 14:24:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:12.454 14:24:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:12.454 14:24:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:12.712 [2024-11-18 14:24:04.654150] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:12.712 14:24:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:12.712 14:24:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:12.712 14:24:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.712 14:24:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:12.971 14:24:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:12.971 14:24:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:12.971 14:24:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:12.971 [2024-11-18 14:24:05.039119] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:12.971 [2024-11-18 
14:24:05.039192] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:13.230 14:24:05 -- bdev/bdev_raid.sh@287 -- # killprocess 139279 00:22:13.230 14:24:05 -- common/autotest_common.sh@936 -- # '[' -z 139279 ']' 00:22:13.230 14:24:05 -- common/autotest_common.sh@940 -- # kill -0 139279 00:22:13.230 14:24:05 -- common/autotest_common.sh@941 -- # uname 00:22:13.230 14:24:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:13.230 14:24:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139279 00:22:13.488 killing process with pid 139279 00:22:13.488 14:24:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:13.488 14:24:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:13.488 14:24:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139279' 00:22:13.488 14:24:05 -- common/autotest_common.sh@955 -- # kill 139279 00:22:13.488 14:24:05 -- common/autotest_common.sh@960 -- # wait 139279 00:22:13.489 [2024-11-18 14:24:05.312810] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:13.489 [2024-11-18 14:24:05.312886] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.748 ************************************ 00:22:13.748 END TEST raid5f_state_function_test 00:22:13.748 ************************************ 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:13.748 00:22:13.748 real 0m12.548s 00:22:13.748 user 0m23.101s 00:22:13.748 sys 0m1.581s 00:22:13.748 14:24:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:13.748 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:22:13.748 14:24:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:13.748 14:24:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:13.748 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:22:13.748 ************************************ 00:22:13.748 START TEST raid5f_state_function_test_sb 00:22:13.748 ************************************ 00:22:13.748 14:24:05 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:13.748 14:24:05 -- 
bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=139700 00:22:13.748 Process raid pid: 139700 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 139700' 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:13.748 14:24:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 139700 /var/tmp/spdk-raid.sock 00:22:13.748 14:24:05 -- common/autotest_common.sh@829 -- # '[' -z 139700 ']' 00:22:13.748 14:24:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:13.748 14:24:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:13.748 14:24:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:13.748 14:24:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.748 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:22:13.748 [2024-11-18 14:24:05.719674] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
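For anyone replaying this state-function flow by hand, a minimal sketch follows. It assumes a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock (as launched above); every command string is taken from the trace itself, and the trailing .state filter is the only addition.

# Sketch (not the test script): replay the superblock state-function setup.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Register the raid5f bdev first; with no base bdevs present it stays in the
# "configuring" state (-s requests an on-disk superblock, -z 64 is the strip size in KiB).
$RPC bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Create the base bdevs one by one (32 MiB, 512-byte blocks); each new bdev is
# claimed by the raid, and the fourth flips the state from "configuring" to "online".
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  $RPC bdev_malloc_create 32 512 -b "$b"
done
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'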
00:22:13.748 [2024-11-18 14:24:05.719837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.007 [2024-11-18 14:24:05.857322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.007 [2024-11-18 14:24:05.924983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.007 [2024-11-18 14:24:05.994091] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.944 14:24:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.944 14:24:06 -- common/autotest_common.sh@862 -- # return 0 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:14.944 [2024-11-18 14:24:06.844026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:14.944 [2024-11-18 14:24:06.844107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:14.944 [2024-11-18 14:24:06.844120] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:14.944 [2024-11-18 14:24:06.844139] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:14.944 [2024-11-18 14:24:06.844146] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:14.944 [2024-11-18 14:24:06.844184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:14.944 [2024-11-18 14:24:06.844192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:14.944 [2024-11-18 14:24:06.844217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.944 14:24:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.203 14:24:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.203 "name": "Existed_Raid", 00:22:15.203 "uuid": "051614b4-e512-4be7-9daf-a060048b68f5", 00:22:15.203 "strip_size_kb": 64, 00:22:15.203 "state": "configuring", 00:22:15.203 "raid_level": "raid5f", 00:22:15.203 "superblock": true, 00:22:15.203 "num_base_bdevs": 4, 00:22:15.203 "num_base_bdevs_discovered": 0, 00:22:15.203 "num_base_bdevs_operational": 4, 00:22:15.203 "base_bdevs_list": [ 00:22:15.203 { 
00:22:15.203 "name": "BaseBdev1", 00:22:15.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.203 "is_configured": false, 00:22:15.203 "data_offset": 0, 00:22:15.203 "data_size": 0 00:22:15.203 }, 00:22:15.203 { 00:22:15.203 "name": "BaseBdev2", 00:22:15.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.203 "is_configured": false, 00:22:15.203 "data_offset": 0, 00:22:15.203 "data_size": 0 00:22:15.203 }, 00:22:15.203 { 00:22:15.203 "name": "BaseBdev3", 00:22:15.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.203 "is_configured": false, 00:22:15.203 "data_offset": 0, 00:22:15.203 "data_size": 0 00:22:15.203 }, 00:22:15.203 { 00:22:15.203 "name": "BaseBdev4", 00:22:15.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.203 "is_configured": false, 00:22:15.203 "data_offset": 0, 00:22:15.203 "data_size": 0 00:22:15.203 } 00:22:15.203 ] 00:22:15.203 }' 00:22:15.203 14:24:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.203 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.771 14:24:07 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:16.030 [2024-11-18 14:24:07.924008] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:16.030 [2024-11-18 14:24:07.924046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:16.030 14:24:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:16.289 [2024-11-18 14:24:08.112079] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:16.289 [2024-11-18 14:24:08.112130] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:16.289 [2024-11-18 14:24:08.112139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:16.289 [2024-11-18 14:24:08.112162] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:16.289 [2024-11-18 14:24:08.112170] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:16.289 [2024-11-18 14:24:08.112185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:16.289 [2024-11-18 14:24:08.112191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:16.289 [2024-11-18 14:24:08.112214] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:16.289 14:24:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:16.289 [2024-11-18 14:24:08.297701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:16.289 BaseBdev1 00:22:16.289 14:24:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:16.289 14:24:08 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:16.289 14:24:08 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:16.289 14:24:08 -- common/autotest_common.sh@899 -- # local i 00:22:16.289 14:24:08 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:16.289 14:24:08 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:16.289 14:24:08 -- common/autotest_common.sh@902 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:16.551 14:24:08 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:16.808 [ 00:22:16.808 { 00:22:16.808 "name": "BaseBdev1", 00:22:16.808 "aliases": [ 00:22:16.808 "8a546e6f-307b-44fa-9569-2df3bf905031" 00:22:16.808 ], 00:22:16.808 "product_name": "Malloc disk", 00:22:16.808 "block_size": 512, 00:22:16.808 "num_blocks": 65536, 00:22:16.808 "uuid": "8a546e6f-307b-44fa-9569-2df3bf905031", 00:22:16.808 "assigned_rate_limits": { 00:22:16.808 "rw_ios_per_sec": 0, 00:22:16.808 "rw_mbytes_per_sec": 0, 00:22:16.808 "r_mbytes_per_sec": 0, 00:22:16.808 "w_mbytes_per_sec": 0 00:22:16.808 }, 00:22:16.808 "claimed": true, 00:22:16.808 "claim_type": "exclusive_write", 00:22:16.808 "zoned": false, 00:22:16.808 "supported_io_types": { 00:22:16.808 "read": true, 00:22:16.808 "write": true, 00:22:16.808 "unmap": true, 00:22:16.808 "write_zeroes": true, 00:22:16.808 "flush": true, 00:22:16.808 "reset": true, 00:22:16.808 "compare": false, 00:22:16.808 "compare_and_write": false, 00:22:16.808 "abort": true, 00:22:16.808 "nvme_admin": false, 00:22:16.808 "nvme_io": false 00:22:16.808 }, 00:22:16.808 "memory_domains": [ 00:22:16.808 { 00:22:16.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.808 "dma_device_type": 2 00:22:16.808 } 00:22:16.808 ], 00:22:16.808 "driver_specific": {} 00:22:16.808 } 00:22:16.808 ] 00:22:16.808 14:24:08 -- common/autotest_common.sh@905 -- # return 0 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.808 14:24:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.066 14:24:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.066 "name": "Existed_Raid", 00:22:17.066 "uuid": "f838f71d-abf5-4747-91ad-faa8644fb1e0", 00:22:17.066 "strip_size_kb": 64, 00:22:17.066 "state": "configuring", 00:22:17.066 "raid_level": "raid5f", 00:22:17.066 "superblock": true, 00:22:17.066 "num_base_bdevs": 4, 00:22:17.066 "num_base_bdevs_discovered": 1, 00:22:17.066 "num_base_bdevs_operational": 4, 00:22:17.066 "base_bdevs_list": [ 00:22:17.066 { 00:22:17.066 "name": "BaseBdev1", 00:22:17.066 "uuid": "8a546e6f-307b-44fa-9569-2df3bf905031", 00:22:17.066 "is_configured": true, 00:22:17.066 "data_offset": 2048, 00:22:17.066 "data_size": 63488 00:22:17.066 }, 00:22:17.066 { 00:22:17.066 "name": "BaseBdev2", 00:22:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.066 "is_configured": false, 00:22:17.066 "data_offset": 0, 00:22:17.066 "data_size": 0 
00:22:17.066 }, 00:22:17.066 { 00:22:17.066 "name": "BaseBdev3", 00:22:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.066 "is_configured": false, 00:22:17.066 "data_offset": 0, 00:22:17.066 "data_size": 0 00:22:17.066 }, 00:22:17.066 { 00:22:17.066 "name": "BaseBdev4", 00:22:17.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.066 "is_configured": false, 00:22:17.066 "data_offset": 0, 00:22:17.066 "data_size": 0 00:22:17.066 } 00:22:17.066 ] 00:22:17.066 }' 00:22:17.066 14:24:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.066 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:22:17.633 14:24:09 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:17.891 [2024-11-18 14:24:09.761930] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:17.891 [2024-11-18 14:24:09.761972] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:17.891 14:24:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:17.891 14:24:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:18.150 14:24:10 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:18.408 BaseBdev1 00:22:18.408 14:24:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:18.408 14:24:10 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:18.408 14:24:10 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:18.408 14:24:10 -- common/autotest_common.sh@899 -- # local i 00:22:18.408 14:24:10 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:18.408 14:24:10 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:18.408 14:24:10 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.408 14:24:10 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:18.667 [ 00:22:18.667 { 00:22:18.667 "name": "BaseBdev1", 00:22:18.667 "aliases": [ 00:22:18.667 "90fb05b6-925c-4c3a-862f-1a1b3592aa1b" 00:22:18.667 ], 00:22:18.667 "product_name": "Malloc disk", 00:22:18.667 "block_size": 512, 00:22:18.667 "num_blocks": 65536, 00:22:18.667 "uuid": "90fb05b6-925c-4c3a-862f-1a1b3592aa1b", 00:22:18.667 "assigned_rate_limits": { 00:22:18.667 "rw_ios_per_sec": 0, 00:22:18.667 "rw_mbytes_per_sec": 0, 00:22:18.667 "r_mbytes_per_sec": 0, 00:22:18.667 "w_mbytes_per_sec": 0 00:22:18.667 }, 00:22:18.667 "claimed": false, 00:22:18.667 "zoned": false, 00:22:18.667 "supported_io_types": { 00:22:18.667 "read": true, 00:22:18.667 "write": true, 00:22:18.667 "unmap": true, 00:22:18.667 "write_zeroes": true, 00:22:18.667 "flush": true, 00:22:18.667 "reset": true, 00:22:18.667 "compare": false, 00:22:18.667 "compare_and_write": false, 00:22:18.667 "abort": true, 00:22:18.667 "nvme_admin": false, 00:22:18.667 "nvme_io": false 00:22:18.667 }, 00:22:18.667 "memory_domains": [ 00:22:18.667 { 00:22:18.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.667 "dma_device_type": 2 00:22:18.667 } 00:22:18.667 ], 00:22:18.667 "driver_specific": {} 00:22:18.667 } 00:22:18.667 ] 00:22:18.667 14:24:10 -- common/autotest_common.sh@905 -- # return 0 00:22:18.667 14:24:10 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:18.926 [2024-11-18 14:24:10.811947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:18.926 [2024-11-18 14:24:10.813934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:18.926 [2024-11-18 14:24:10.814005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:18.926 [2024-11-18 14:24:10.814016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:18.926 [2024-11-18 14:24:10.814040] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:18.926 [2024-11-18 14:24:10.814048] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:18.926 [2024-11-18 14:24:10.814064] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.926 14:24:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.186 14:24:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.186 "name": "Existed_Raid", 00:22:19.186 "uuid": "eb04ac7e-bd5f-4d8c-9cf2-eacd1a01d6ba", 00:22:19.186 "strip_size_kb": 64, 00:22:19.186 "state": "configuring", 00:22:19.186 "raid_level": "raid5f", 00:22:19.186 "superblock": true, 00:22:19.186 "num_base_bdevs": 4, 00:22:19.186 "num_base_bdevs_discovered": 1, 00:22:19.186 "num_base_bdevs_operational": 4, 00:22:19.186 "base_bdevs_list": [ 00:22:19.186 { 00:22:19.186 "name": "BaseBdev1", 00:22:19.186 "uuid": "90fb05b6-925c-4c3a-862f-1a1b3592aa1b", 00:22:19.186 "is_configured": true, 00:22:19.186 "data_offset": 2048, 00:22:19.186 "data_size": 63488 00:22:19.186 }, 00:22:19.186 { 00:22:19.186 "name": "BaseBdev2", 00:22:19.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.186 "is_configured": false, 00:22:19.186 "data_offset": 0, 00:22:19.186 "data_size": 0 00:22:19.186 }, 00:22:19.186 { 00:22:19.186 "name": "BaseBdev3", 00:22:19.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.186 "is_configured": false, 00:22:19.186 "data_offset": 0, 00:22:19.186 "data_size": 0 00:22:19.186 }, 00:22:19.186 { 00:22:19.186 "name": "BaseBdev4", 00:22:19.186 "uuid": "00000000-0000-0000-0000-000000000000", 
00:22:19.186 "is_configured": false, 00:22:19.186 "data_offset": 0, 00:22:19.186 "data_size": 0 00:22:19.186 } 00:22:19.186 ] 00:22:19.186 }' 00:22:19.186 14:24:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.186 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.779 14:24:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:20.085 [2024-11-18 14:24:11.868897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.085 BaseBdev2 00:22:20.085 14:24:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:20.085 14:24:11 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:20.085 14:24:11 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:20.085 14:24:11 -- common/autotest_common.sh@899 -- # local i 00:22:20.085 14:24:11 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:20.085 14:24:11 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:20.085 14:24:11 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:20.085 14:24:12 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:20.363 [ 00:22:20.363 { 00:22:20.363 "name": "BaseBdev2", 00:22:20.363 "aliases": [ 00:22:20.363 "a2003dd3-3f64-40e5-9fa2-c62d691a2130" 00:22:20.363 ], 00:22:20.363 "product_name": "Malloc disk", 00:22:20.363 "block_size": 512, 00:22:20.363 "num_blocks": 65536, 00:22:20.363 "uuid": "a2003dd3-3f64-40e5-9fa2-c62d691a2130", 00:22:20.363 "assigned_rate_limits": { 00:22:20.363 "rw_ios_per_sec": 0, 00:22:20.363 "rw_mbytes_per_sec": 0, 00:22:20.363 "r_mbytes_per_sec": 0, 00:22:20.363 "w_mbytes_per_sec": 0 00:22:20.363 }, 00:22:20.363 "claimed": true, 00:22:20.363 "claim_type": "exclusive_write", 00:22:20.363 "zoned": false, 00:22:20.363 "supported_io_types": { 00:22:20.363 "read": true, 00:22:20.363 "write": true, 00:22:20.363 "unmap": true, 00:22:20.363 "write_zeroes": true, 00:22:20.363 "flush": true, 00:22:20.363 "reset": true, 00:22:20.363 "compare": false, 00:22:20.363 "compare_and_write": false, 00:22:20.363 "abort": true, 00:22:20.363 "nvme_admin": false, 00:22:20.363 "nvme_io": false 00:22:20.363 }, 00:22:20.363 "memory_domains": [ 00:22:20.363 { 00:22:20.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.363 "dma_device_type": 2 00:22:20.363 } 00:22:20.363 ], 00:22:20.363 "driver_specific": {} 00:22:20.363 } 00:22:20.363 ] 00:22:20.363 14:24:12 -- common/autotest_common.sh@905 -- # return 0 00:22:20.363 14:24:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:20.363 14:24:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:20.363 14:24:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.364 14:24:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.622 14:24:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:20.622 "name": "Existed_Raid", 00:22:20.622 "uuid": "eb04ac7e-bd5f-4d8c-9cf2-eacd1a01d6ba", 00:22:20.622 "strip_size_kb": 64, 00:22:20.622 "state": "configuring", 00:22:20.622 "raid_level": "raid5f", 00:22:20.622 "superblock": true, 00:22:20.622 "num_base_bdevs": 4, 00:22:20.622 "num_base_bdevs_discovered": 2, 00:22:20.622 "num_base_bdevs_operational": 4, 00:22:20.622 "base_bdevs_list": [ 00:22:20.622 { 00:22:20.622 "name": "BaseBdev1", 00:22:20.622 "uuid": "90fb05b6-925c-4c3a-862f-1a1b3592aa1b", 00:22:20.622 "is_configured": true, 00:22:20.622 "data_offset": 2048, 00:22:20.622 "data_size": 63488 00:22:20.622 }, 00:22:20.622 { 00:22:20.622 "name": "BaseBdev2", 00:22:20.622 "uuid": "a2003dd3-3f64-40e5-9fa2-c62d691a2130", 00:22:20.622 "is_configured": true, 00:22:20.622 "data_offset": 2048, 00:22:20.622 "data_size": 63488 00:22:20.622 }, 00:22:20.622 { 00:22:20.622 "name": "BaseBdev3", 00:22:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.622 "is_configured": false, 00:22:20.622 "data_offset": 0, 00:22:20.622 "data_size": 0 00:22:20.622 }, 00:22:20.622 { 00:22:20.622 "name": "BaseBdev4", 00:22:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.622 "is_configured": false, 00:22:20.622 "data_offset": 0, 00:22:20.622 "data_size": 0 00:22:20.622 } 00:22:20.622 ] 00:22:20.622 }' 00:22:20.622 14:24:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:20.622 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:22:21.189 14:24:13 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:21.448 [2024-11-18 14:24:13.272556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:21.448 BaseBdev3 00:22:21.449 14:24:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:21.449 14:24:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:21.449 14:24:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:21.449 14:24:13 -- common/autotest_common.sh@899 -- # local i 00:22:21.449 14:24:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:21.449 14:24:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:21.449 14:24:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.449 14:24:13 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:21.708 [ 00:22:21.708 { 00:22:21.708 "name": "BaseBdev3", 00:22:21.708 "aliases": [ 00:22:21.708 "85b9685e-2741-4ea3-9035-26e7a4754cef" 00:22:21.708 ], 00:22:21.708 "product_name": "Malloc disk", 00:22:21.708 "block_size": 512, 00:22:21.708 "num_blocks": 65536, 00:22:21.708 "uuid": "85b9685e-2741-4ea3-9035-26e7a4754cef", 00:22:21.708 "assigned_rate_limits": { 00:22:21.708 "rw_ios_per_sec": 0, 00:22:21.708 "rw_mbytes_per_sec": 0, 00:22:21.708 "r_mbytes_per_sec": 0, 00:22:21.708 "w_mbytes_per_sec": 0 00:22:21.708 }, 00:22:21.708 "claimed": true, 00:22:21.708 "claim_type": "exclusive_write", 
00:22:21.708 "zoned": false, 00:22:21.708 "supported_io_types": { 00:22:21.708 "read": true, 00:22:21.708 "write": true, 00:22:21.708 "unmap": true, 00:22:21.708 "write_zeroes": true, 00:22:21.708 "flush": true, 00:22:21.708 "reset": true, 00:22:21.708 "compare": false, 00:22:21.708 "compare_and_write": false, 00:22:21.708 "abort": true, 00:22:21.708 "nvme_admin": false, 00:22:21.708 "nvme_io": false 00:22:21.708 }, 00:22:21.708 "memory_domains": [ 00:22:21.708 { 00:22:21.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.708 "dma_device_type": 2 00:22:21.708 } 00:22:21.708 ], 00:22:21.708 "driver_specific": {} 00:22:21.708 } 00:22:21.708 ] 00:22:21.708 14:24:13 -- common/autotest_common.sh@905 -- # return 0 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:21.708 14:24:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.966 14:24:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.966 "name": "Existed_Raid", 00:22:21.966 "uuid": "eb04ac7e-bd5f-4d8c-9cf2-eacd1a01d6ba", 00:22:21.966 "strip_size_kb": 64, 00:22:21.966 "state": "configuring", 00:22:21.966 "raid_level": "raid5f", 00:22:21.966 "superblock": true, 00:22:21.966 "num_base_bdevs": 4, 00:22:21.966 "num_base_bdevs_discovered": 3, 00:22:21.966 "num_base_bdevs_operational": 4, 00:22:21.966 "base_bdevs_list": [ 00:22:21.966 { 00:22:21.966 "name": "BaseBdev1", 00:22:21.966 "uuid": "90fb05b6-925c-4c3a-862f-1a1b3592aa1b", 00:22:21.966 "is_configured": true, 00:22:21.966 "data_offset": 2048, 00:22:21.966 "data_size": 63488 00:22:21.966 }, 00:22:21.966 { 00:22:21.966 "name": "BaseBdev2", 00:22:21.966 "uuid": "a2003dd3-3f64-40e5-9fa2-c62d691a2130", 00:22:21.966 "is_configured": true, 00:22:21.966 "data_offset": 2048, 00:22:21.966 "data_size": 63488 00:22:21.966 }, 00:22:21.966 { 00:22:21.966 "name": "BaseBdev3", 00:22:21.966 "uuid": "85b9685e-2741-4ea3-9035-26e7a4754cef", 00:22:21.966 "is_configured": true, 00:22:21.966 "data_offset": 2048, 00:22:21.966 "data_size": 63488 00:22:21.966 }, 00:22:21.966 { 00:22:21.966 "name": "BaseBdev4", 00:22:21.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.966 "is_configured": false, 00:22:21.966 "data_offset": 0, 00:22:21.966 "data_size": 0 00:22:21.966 } 00:22:21.966 ] 00:22:21.966 }' 00:22:21.966 14:24:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.966 14:24:13 -- common/autotest_common.sh@10 -- # set +x 00:22:22.534 14:24:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:22.794 [2024-11-18 14:24:14.672487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:22.794 [2024-11-18 14:24:14.672751] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:22:22.794 [2024-11-18 14:24:14.672766] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:22.794 [2024-11-18 14:24:14.672943] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:22:22.794 BaseBdev4 00:22:22.794 [2024-11-18 14:24:14.673739] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:22:22.794 [2024-11-18 14:24:14.673762] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:22:22.794 [2024-11-18 14:24:14.673929] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.794 14:24:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:22:22.794 14:24:14 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:22.794 14:24:14 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:22.794 14:24:14 -- common/autotest_common.sh@899 -- # local i 00:22:22.794 14:24:14 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:22.794 14:24:14 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:22.794 14:24:14 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.054 14:24:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:23.312 [ 00:22:23.312 { 00:22:23.312 "name": "BaseBdev4", 00:22:23.312 "aliases": [ 00:22:23.312 "d6b311bf-9f30-4d34-96f1-099406251230" 00:22:23.312 ], 00:22:23.312 "product_name": "Malloc disk", 00:22:23.312 "block_size": 512, 00:22:23.312 "num_blocks": 65536, 00:22:23.312 "uuid": "d6b311bf-9f30-4d34-96f1-099406251230", 00:22:23.312 "assigned_rate_limits": { 00:22:23.312 "rw_ios_per_sec": 0, 00:22:23.312 "rw_mbytes_per_sec": 0, 00:22:23.312 "r_mbytes_per_sec": 0, 00:22:23.312 "w_mbytes_per_sec": 0 00:22:23.312 }, 00:22:23.312 "claimed": true, 00:22:23.312 "claim_type": "exclusive_write", 00:22:23.312 "zoned": false, 00:22:23.312 "supported_io_types": { 00:22:23.312 "read": true, 00:22:23.312 "write": true, 00:22:23.312 "unmap": true, 00:22:23.312 "write_zeroes": true, 00:22:23.312 "flush": true, 00:22:23.312 "reset": true, 00:22:23.312 "compare": false, 00:22:23.312 "compare_and_write": false, 00:22:23.312 "abort": true, 00:22:23.312 "nvme_admin": false, 00:22:23.312 "nvme_io": false 00:22:23.312 }, 00:22:23.312 "memory_domains": [ 00:22:23.312 { 00:22:23.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.312 "dma_device_type": 2 00:22:23.312 } 00:22:23.312 ], 00:22:23.312 "driver_specific": {} 00:22:23.312 } 00:22:23.312 ] 00:22:23.312 14:24:15 -- common/autotest_common.sh@905 -- # return 0 00:22:23.312 14:24:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:23.312 14:24:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:23.312 14:24:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:23.312 14:24:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:23.312 14:24:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.313 14:24:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.571 14:24:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.571 "name": "Existed_Raid", 00:22:23.571 "uuid": "eb04ac7e-bd5f-4d8c-9cf2-eacd1a01d6ba", 00:22:23.571 "strip_size_kb": 64, 00:22:23.571 "state": "online", 00:22:23.571 "raid_level": "raid5f", 00:22:23.571 "superblock": true, 00:22:23.571 "num_base_bdevs": 4, 00:22:23.571 "num_base_bdevs_discovered": 4, 00:22:23.571 "num_base_bdevs_operational": 4, 00:22:23.571 "base_bdevs_list": [ 00:22:23.571 { 00:22:23.571 "name": "BaseBdev1", 00:22:23.571 "uuid": "90fb05b6-925c-4c3a-862f-1a1b3592aa1b", 00:22:23.571 "is_configured": true, 00:22:23.571 "data_offset": 2048, 00:22:23.571 "data_size": 63488 00:22:23.571 }, 00:22:23.571 { 00:22:23.571 "name": "BaseBdev2", 00:22:23.571 "uuid": "a2003dd3-3f64-40e5-9fa2-c62d691a2130", 00:22:23.571 "is_configured": true, 00:22:23.571 "data_offset": 2048, 00:22:23.571 "data_size": 63488 00:22:23.571 }, 00:22:23.571 { 00:22:23.571 "name": "BaseBdev3", 00:22:23.571 "uuid": "85b9685e-2741-4ea3-9035-26e7a4754cef", 00:22:23.571 "is_configured": true, 00:22:23.571 "data_offset": 2048, 00:22:23.571 "data_size": 63488 00:22:23.571 }, 00:22:23.571 { 00:22:23.571 "name": "BaseBdev4", 00:22:23.571 "uuid": "d6b311bf-9f30-4d34-96f1-099406251230", 00:22:23.571 "is_configured": true, 00:22:23.571 "data_offset": 2048, 00:22:23.571 "data_size": 63488 00:22:23.571 } 00:22:23.571 ] 00:22:23.572 }' 00:22:23.572 14:24:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.572 14:24:15 -- common/autotest_common.sh@10 -- # set +x 00:22:24.139 14:24:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:24.139 [2024-11-18 14:24:16.192811] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.398 14:24:16 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.398 14:24:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.657 14:24:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.657 "name": "Existed_Raid", 00:22:24.657 "uuid": "eb04ac7e-bd5f-4d8c-9cf2-eacd1a01d6ba", 00:22:24.657 "strip_size_kb": 64, 00:22:24.657 "state": "online", 00:22:24.657 "raid_level": "raid5f", 00:22:24.657 "superblock": true, 00:22:24.657 "num_base_bdevs": 4, 00:22:24.657 "num_base_bdevs_discovered": 3, 00:22:24.657 "num_base_bdevs_operational": 3, 00:22:24.657 "base_bdevs_list": [ 00:22:24.657 { 00:22:24.657 "name": null, 00:22:24.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.657 "is_configured": false, 00:22:24.657 "data_offset": 2048, 00:22:24.657 "data_size": 63488 00:22:24.657 }, 00:22:24.657 { 00:22:24.657 "name": "BaseBdev2", 00:22:24.657 "uuid": "a2003dd3-3f64-40e5-9fa2-c62d691a2130", 00:22:24.657 "is_configured": true, 00:22:24.657 "data_offset": 2048, 00:22:24.657 "data_size": 63488 00:22:24.657 }, 00:22:24.657 { 00:22:24.657 "name": "BaseBdev3", 00:22:24.657 "uuid": "85b9685e-2741-4ea3-9035-26e7a4754cef", 00:22:24.657 "is_configured": true, 00:22:24.657 "data_offset": 2048, 00:22:24.657 "data_size": 63488 00:22:24.657 }, 00:22:24.657 { 00:22:24.657 "name": "BaseBdev4", 00:22:24.657 "uuid": "d6b311bf-9f30-4d34-96f1-099406251230", 00:22:24.658 "is_configured": true, 00:22:24.658 "data_offset": 2048, 00:22:24.658 "data_size": 63488 00:22:24.658 } 00:22:24.658 ] 00:22:24.658 }' 00:22:24.658 14:24:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.658 14:24:16 -- common/autotest_common.sh@10 -- # set +x 00:22:25.225 14:24:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:25.225 14:24:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:25.225 14:24:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.225 14:24:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:25.484 14:24:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:25.484 14:24:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:25.484 14:24:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:25.484 [2024-11-18 14:24:17.556694] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:25.484 [2024-11-18 14:24:17.556726] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:25.484 [2024-11-18 14:24:17.556808] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.742 14:24:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:25.742 14:24:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:25.742 14:24:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.742 14:24:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:26.001 14:24:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:26.001 14:24:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:26.001 14:24:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:26.001 [2024-11-18 14:24:18.045364] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:26.001 14:24:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:26.001 14:24:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:26.001 14:24:18 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.001 14:24:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:26.260 14:24:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:26.260 14:24:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:26.260 14:24:18 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:26.518 [2024-11-18 14:24:18.433466] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:26.518 [2024-11-18 14:24:18.433514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:22:26.518 14:24:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:26.518 14:24:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:26.518 14:24:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.518 14:24:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:26.777 14:24:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:26.777 14:24:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:26.777 14:24:18 -- bdev/bdev_raid.sh@287 -- # killprocess 139700 00:22:26.777 14:24:18 -- common/autotest_common.sh@936 -- # '[' -z 139700 ']' 00:22:26.777 14:24:18 -- common/autotest_common.sh@940 -- # kill -0 139700 00:22:26.777 14:24:18 -- common/autotest_common.sh@941 -- # uname 00:22:26.777 14:24:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.777 14:24:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139700 00:22:26.777 14:24:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:26.777 14:24:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:26.777 14:24:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139700' 00:22:26.777 killing process with pid 139700 00:22:26.777 14:24:18 -- common/autotest_common.sh@955 -- # kill 139700 00:22:26.777 14:24:18 -- common/autotest_common.sh@960 -- # wait 139700 00:22:26.777 [2024-11-18 14:24:18.723690] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:26.777 [2024-11-18 14:24:18.723774] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.036 ************************************ 00:22:27.036 END TEST raid5f_state_function_test_sb 00:22:27.036 ************************************ 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:27.036 00:22:27.036 real 0m13.334s 00:22:27.036 user 0m24.574s 00:22:27.036 sys 0m1.612s 00:22:27.036 14:24:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:27.036 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:22:27.036 14:24:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:27.036 14:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:27.036 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.036 
************************************ 00:22:27.036 START TEST raid5f_superblock_test 00:22:27.036 ************************************ 00:22:27.036 14:24:19 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=140136 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 140136 /var/tmp/spdk-raid.sock 00:22:27.036 14:24:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:27.036 14:24:19 -- common/autotest_common.sh@829 -- # '[' -z 140136 ']' 00:22:27.036 14:24:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.036 14:24:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.036 14:24:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:27.036 14:24:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.036 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.296 [2024-11-18 14:24:19.116746] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
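For reference, the superblock-test setup that follows can be replayed by hand with the same RPC strings that appear later in this trace; the only assumption is the bdev_svc instance on /var/tmp/spdk-raid.sock launched above.

# Sketch of the raid5f superblock setup: each malloc bdev is wrapped in a
# passthru bdev with a fixed UUID before being handed to bdev_raid_create.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  $RPC bdev_malloc_create 32 512 -b "malloc$i"
  # Fixed -u UUIDs give the base bdevs deterministic identities (inferred from
  # the -u arguments in the trace; presumably so superblock entries can be
  # matched back to base bdevs).
  $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
$RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# Capture the raid bdev's UUID, as the test does at bdev_raid.sh@379.
$RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'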
00:22:27.296 [2024-11-18 14:24:19.116948] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140136 ] 00:22:27.296 [2024-11-18 14:24:19.253515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.296 [2024-11-18 14:24:19.318426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.554 [2024-11-18 14:24:19.387526] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.121 14:24:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.121 14:24:20 -- common/autotest_common.sh@862 -- # return 0 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:28.121 14:24:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:28.380 malloc1 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:28.380 [2024-11-18 14:24:20.430799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:28.380 [2024-11-18 14:24:20.430903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.380 [2024-11-18 14:24:20.430946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:28.380 [2024-11-18 14:24:20.430994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.380 [2024-11-18 14:24:20.433380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.380 [2024-11-18 14:24:20.433436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:28.380 pt1 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:28.380 14:24:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:28.639 malloc2 00:22:28.639 14:24:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:22:28.898 [2024-11-18 14:24:20.920110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:28.898 [2024-11-18 14:24:20.920173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.898 [2024-11-18 14:24:20.920209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:28.898 [2024-11-18 14:24:20.920253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.898 [2024-11-18 14:24:20.922438] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.898 [2024-11-18 14:24:20.922486] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:28.898 pt2 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:28.898 14:24:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:29.157 malloc3 00:22:29.157 14:24:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:29.415 [2024-11-18 14:24:21.335827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:29.415 [2024-11-18 14:24:21.335891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.415 [2024-11-18 14:24:21.335926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:29.415 [2024-11-18 14:24:21.335968] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.415 [2024-11-18 14:24:21.338145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.415 [2024-11-18 14:24:21.338206] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:29.415 pt3 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:29.415 14:24:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:29.674 malloc4 00:22:29.674 14:24:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:22:29.674 [2024-11-18 14:24:21.713130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:29.674 [2024-11-18 14:24:21.713198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.674 [2024-11-18 14:24:21.713228] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:29.674 [2024-11-18 14:24:21.713269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.674 [2024-11-18 14:24:21.715497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.674 [2024-11-18 14:24:21.715548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:29.674 pt4 00:22:29.674 14:24:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:29.674 14:24:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:29.674 14:24:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:29.933 [2024-11-18 14:24:21.901274] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:29.933 [2024-11-18 14:24:21.903263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:29.933 [2024-11-18 14:24:21.903330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:29.933 [2024-11-18 14:24:21.903376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:29.933 [2024-11-18 14:24:21.903599] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:22:29.933 [2024-11-18 14:24:21.903613] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:29.933 [2024-11-18 14:24:21.903728] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:29.933 [2024-11-18 14:24:21.904481] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:22:29.933 [2024-11-18 14:24:21.904502] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:22:29.933 [2024-11-18 14:24:21.904642] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.933 14:24:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.192 14:24:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.192 "name": "raid_bdev1", 00:22:30.192 "uuid": 
"0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:30.192 "strip_size_kb": 64, 00:22:30.192 "state": "online", 00:22:30.192 "raid_level": "raid5f", 00:22:30.192 "superblock": true, 00:22:30.192 "num_base_bdevs": 4, 00:22:30.192 "num_base_bdevs_discovered": 4, 00:22:30.192 "num_base_bdevs_operational": 4, 00:22:30.192 "base_bdevs_list": [ 00:22:30.192 { 00:22:30.192 "name": "pt1", 00:22:30.192 "uuid": "2d1a884a-119c-5305-8e81-85cc46969f2c", 00:22:30.192 "is_configured": true, 00:22:30.192 "data_offset": 2048, 00:22:30.192 "data_size": 63488 00:22:30.192 }, 00:22:30.192 { 00:22:30.192 "name": "pt2", 00:22:30.192 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:30.192 "is_configured": true, 00:22:30.192 "data_offset": 2048, 00:22:30.192 "data_size": 63488 00:22:30.192 }, 00:22:30.192 { 00:22:30.192 "name": "pt3", 00:22:30.192 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:30.192 "is_configured": true, 00:22:30.192 "data_offset": 2048, 00:22:30.192 "data_size": 63488 00:22:30.192 }, 00:22:30.192 { 00:22:30.192 "name": "pt4", 00:22:30.192 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:30.192 "is_configured": true, 00:22:30.192 "data_offset": 2048, 00:22:30.192 "data_size": 63488 00:22:30.192 } 00:22:30.192 ] 00:22:30.192 }' 00:22:30.192 14:24:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.192 14:24:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.759 14:24:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:30.759 14:24:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:31.017 [2024-11-18 14:24:22.859406] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:31.017 14:24:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0c30d40e-3efc-4598-8bda-92da058480ac 00:22:31.017 14:24:22 -- bdev/bdev_raid.sh@380 -- # '[' -z 0c30d40e-3efc-4598-8bda-92da058480ac ']' 00:22:31.017 14:24:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:31.017 [2024-11-18 14:24:23.051307] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.017 [2024-11-18 14:24:23.051328] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:31.017 [2024-11-18 14:24:23.051412] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.017 [2024-11-18 14:24:23.051490] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.017 [2024-11-18 14:24:23.051501] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:22:31.017 14:24:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:31.017 14:24:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.276 14:24:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:31.276 14:24:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:31.276 14:24:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:31.276 14:24:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:31.534 14:24:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:31.534 14:24:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:22:31.793 14:24:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:31.793 14:24:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:32.051 14:24:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:32.051 14:24:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:32.051 14:24:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:32.051 14:24:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:32.310 14:24:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:32.310 14:24:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:32.310 14:24:24 -- common/autotest_common.sh@650 -- # local es=0 00:22:32.310 14:24:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:32.310 14:24:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:32.310 14:24:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.310 14:24:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:32.310 14:24:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.310 14:24:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:32.310 14:24:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.310 14:24:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:32.310 14:24:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:32.310 14:24:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:32.569 [2024-11-18 14:24:24.467496] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:32.569 [2024-11-18 14:24:24.469615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:32.569 [2024-11-18 14:24:24.469782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:32.569 [2024-11-18 14:24:24.469853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:32.569 [2024-11-18 14:24:24.470026] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:32.569 [2024-11-18 14:24:24.470243] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:32.569 [2024-11-18 14:24:24.470383] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:32.569 [2024-11-18 14:24:24.470548] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:22:32.569 [2024-11-18 14:24:24.470715] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:32.569 [2024-11-18 14:24:24.470822] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:22:32.569 request: 00:22:32.569 { 00:22:32.569 "name": "raid_bdev1", 00:22:32.569 "raid_level": "raid5f", 00:22:32.569 "base_bdevs": [ 00:22:32.569 "malloc1", 00:22:32.569 "malloc2", 00:22:32.569 "malloc3", 00:22:32.569 "malloc4" 00:22:32.569 ], 00:22:32.569 "superblock": false, 00:22:32.569 "strip_size_kb": 64, 00:22:32.569 "method": "bdev_raid_create", 00:22:32.569 "req_id": 1 00:22:32.569 } 00:22:32.569 Got JSON-RPC error response 00:22:32.569 response: 00:22:32.569 { 00:22:32.569 "code": -17, 00:22:32.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:32.569 } 00:22:32.569 14:24:24 -- common/autotest_common.sh@653 -- # es=1 00:22:32.569 14:24:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:32.569 14:24:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:32.569 14:24:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:32.569 14:24:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.569 14:24:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:32.828 [2024-11-18 14:24:24.839505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:32.828 [2024-11-18 14:24:24.839669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.828 [2024-11-18 14:24:24.839736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:32.828 [2024-11-18 14:24:24.839855] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.828 [2024-11-18 14:24:24.842066] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.828 [2024-11-18 14:24:24.842225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:32.828 [2024-11-18 14:24:24.842403] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:32.828 [2024-11-18 14:24:24.842573] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:32.828 pt1 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.828 14:24:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.086 14:24:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.086 "name": "raid_bdev1", 00:22:33.086 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:33.086 "strip_size_kb": 64, 00:22:33.086 "state": "configuring", 00:22:33.086 "raid_level": "raid5f", 00:22:33.086 "superblock": true, 00:22:33.086 "num_base_bdevs": 4, 00:22:33.086 "num_base_bdevs_discovered": 1, 00:22:33.086 "num_base_bdevs_operational": 4, 00:22:33.086 "base_bdevs_list": [ 00:22:33.086 { 00:22:33.086 "name": "pt1", 00:22:33.086 "uuid": "2d1a884a-119c-5305-8e81-85cc46969f2c", 00:22:33.086 "is_configured": true, 00:22:33.086 "data_offset": 2048, 00:22:33.086 "data_size": 63488 00:22:33.086 }, 00:22:33.086 { 00:22:33.086 "name": null, 00:22:33.086 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:33.086 "is_configured": false, 00:22:33.086 "data_offset": 2048, 00:22:33.086 "data_size": 63488 00:22:33.086 }, 00:22:33.086 { 00:22:33.086 "name": null, 00:22:33.086 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:33.086 "is_configured": false, 00:22:33.086 "data_offset": 2048, 00:22:33.086 "data_size": 63488 00:22:33.086 }, 00:22:33.086 { 00:22:33.086 "name": null, 00:22:33.086 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:33.086 "is_configured": false, 00:22:33.086 "data_offset": 2048, 00:22:33.086 "data_size": 63488 00:22:33.086 } 00:22:33.086 ] 00:22:33.086 }' 00:22:33.086 14:24:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.086 14:24:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.653 14:24:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:22:33.653 14:24:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:33.912 [2024-11-18 14:24:25.811686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:33.912 [2024-11-18 14:24:25.811873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.912 [2024-11-18 14:24:25.811945] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:33.912 [2024-11-18 14:24:25.812212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.912 [2024-11-18 14:24:25.812604] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.912 [2024-11-18 14:24:25.812767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:33.912 [2024-11-18 14:24:25.812931] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:33.912 [2024-11-18 14:24:25.813043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:33.912 pt2 00:22:33.912 14:24:25 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:34.171 [2024-11-18 14:24:25.999712] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:34.171 14:24:26 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.171 14:24:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.430 14:24:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.430 "name": "raid_bdev1", 00:22:34.430 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:34.430 "strip_size_kb": 64, 00:22:34.430 "state": "configuring", 00:22:34.430 "raid_level": "raid5f", 00:22:34.430 "superblock": true, 00:22:34.430 "num_base_bdevs": 4, 00:22:34.430 "num_base_bdevs_discovered": 1, 00:22:34.430 "num_base_bdevs_operational": 4, 00:22:34.430 "base_bdevs_list": [ 00:22:34.430 { 00:22:34.430 "name": "pt1", 00:22:34.430 "uuid": "2d1a884a-119c-5305-8e81-85cc46969f2c", 00:22:34.430 "is_configured": true, 00:22:34.430 "data_offset": 2048, 00:22:34.430 "data_size": 63488 00:22:34.430 }, 00:22:34.430 { 00:22:34.430 "name": null, 00:22:34.430 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:34.430 "is_configured": false, 00:22:34.430 "data_offset": 2048, 00:22:34.430 "data_size": 63488 00:22:34.430 }, 00:22:34.430 { 00:22:34.430 "name": null, 00:22:34.430 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:34.430 "is_configured": false, 00:22:34.430 "data_offset": 2048, 00:22:34.430 "data_size": 63488 00:22:34.430 }, 00:22:34.430 { 00:22:34.430 "name": null, 00:22:34.430 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:34.430 "is_configured": false, 00:22:34.430 "data_offset": 2048, 00:22:34.430 "data_size": 63488 00:22:34.430 } 00:22:34.430 ] 00:22:34.430 }' 00:22:34.430 14:24:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.430 14:24:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.997 14:24:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:34.997 14:24:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:34.997 14:24:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:35.256 [2024-11-18 14:24:27.147910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:35.256 [2024-11-18 14:24:27.148098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.256 [2024-11-18 14:24:27.148167] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:35.256 [2024-11-18 14:24:27.148441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.256 [2024-11-18 14:24:27.148806] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.256 [2024-11-18 14:24:27.148969] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:35.256 [2024-11-18 14:24:27.149139] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:35.256 [2024-11-18 14:24:27.149247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:35.256 pt2 00:22:35.256 14:24:27 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:35.256 14:24:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:35.256 14:24:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:35.569 [2024-11-18 14:24:27.395967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:35.569 [2024-11-18 14:24:27.396155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.569 [2024-11-18 14:24:27.396222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:35.569 [2024-11-18 14:24:27.396486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.569 [2024-11-18 14:24:27.396869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.569 [2024-11-18 14:24:27.397045] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:35.569 [2024-11-18 14:24:27.397222] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:35.569 [2024-11-18 14:24:27.397338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:35.569 pt3 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:35.569 [2024-11-18 14:24:27.587990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:35.569 [2024-11-18 14:24:27.588174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.569 [2024-11-18 14:24:27.588238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:35.569 [2024-11-18 14:24:27.588523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.569 [2024-11-18 14:24:27.588883] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.569 [2024-11-18 14:24:27.589056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:35.569 [2024-11-18 14:24:27.589214] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:35.569 [2024-11-18 14:24:27.589323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:35.569 [2024-11-18 14:24:27.589481] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:35.569 [2024-11-18 14:24:27.589564] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:35.569 [2024-11-18 14:24:27.589802] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:35.569 [2024-11-18 14:24:27.590566] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:35.569 [2024-11-18 14:24:27.590687] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:35.569 [2024-11-18 14:24:27.590892] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.569 pt4 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.569 14:24:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.828 14:24:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.828 "name": "raid_bdev1", 00:22:35.828 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:35.828 "strip_size_kb": 64, 00:22:35.828 "state": "online", 00:22:35.828 "raid_level": "raid5f", 00:22:35.828 "superblock": true, 00:22:35.828 "num_base_bdevs": 4, 00:22:35.828 "num_base_bdevs_discovered": 4, 00:22:35.828 "num_base_bdevs_operational": 4, 00:22:35.828 "base_bdevs_list": [ 00:22:35.828 { 00:22:35.828 "name": "pt1", 00:22:35.828 "uuid": "2d1a884a-119c-5305-8e81-85cc46969f2c", 00:22:35.828 "is_configured": true, 00:22:35.828 "data_offset": 2048, 00:22:35.828 "data_size": 63488 00:22:35.828 }, 00:22:35.828 { 00:22:35.828 "name": "pt2", 00:22:35.828 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:35.828 "is_configured": true, 00:22:35.828 "data_offset": 2048, 00:22:35.828 "data_size": 63488 00:22:35.828 }, 00:22:35.828 { 00:22:35.828 "name": "pt3", 00:22:35.828 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:35.828 "is_configured": true, 00:22:35.828 "data_offset": 2048, 00:22:35.828 "data_size": 63488 00:22:35.828 }, 00:22:35.828 { 00:22:35.828 "name": "pt4", 00:22:35.828 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:35.828 "is_configured": true, 00:22:35.828 "data_offset": 2048, 00:22:35.828 "data_size": 63488 00:22:35.828 } 00:22:35.828 ] 00:22:35.828 }' 00:22:35.828 14:24:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.828 14:24:27 -- common/autotest_common.sh@10 -- # set +x 00:22:36.396 14:24:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:36.396 14:24:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:36.654 [2024-11-18 14:24:28.684903] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:36.654 14:24:28 -- bdev/bdev_raid.sh@430 -- # '[' 0c30d40e-3efc-4598-8bda-92da058480ac '!=' 0c30d40e-3efc-4598-8bda-92da058480ac ']' 00:22:36.654 14:24:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:22:36.654 14:24:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:36.654 14:24:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:36.654 14:24:28 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:36.912 [2024-11-18 14:24:28.880837] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.912 14:24:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.913 14:24:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.913 14:24:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.913 14:24:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.172 14:24:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.172 "name": "raid_bdev1", 00:22:37.172 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:37.172 "strip_size_kb": 64, 00:22:37.172 "state": "online", 00:22:37.172 "raid_level": "raid5f", 00:22:37.172 "superblock": true, 00:22:37.172 "num_base_bdevs": 4, 00:22:37.172 "num_base_bdevs_discovered": 3, 00:22:37.172 "num_base_bdevs_operational": 3, 00:22:37.172 "base_bdevs_list": [ 00:22:37.172 { 00:22:37.172 "name": null, 00:22:37.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.172 "is_configured": false, 00:22:37.172 "data_offset": 2048, 00:22:37.172 "data_size": 63488 00:22:37.172 }, 00:22:37.172 { 00:22:37.172 "name": "pt2", 00:22:37.172 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:37.172 "is_configured": true, 00:22:37.172 "data_offset": 2048, 00:22:37.172 "data_size": 63488 00:22:37.172 }, 00:22:37.172 { 00:22:37.172 "name": "pt3", 00:22:37.172 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:37.172 "is_configured": true, 00:22:37.172 "data_offset": 2048, 00:22:37.172 "data_size": 63488 00:22:37.172 }, 00:22:37.172 { 00:22:37.172 "name": "pt4", 00:22:37.172 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:37.172 "is_configured": true, 00:22:37.172 "data_offset": 2048, 00:22:37.172 "data_size": 63488 00:22:37.172 } 00:22:37.172 ] 00:22:37.172 }' 00:22:37.172 14:24:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.172 14:24:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.737 14:24:29 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:37.995 [2024-11-18 14:24:29.993019] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:37.995 [2024-11-18 14:24:29.993160] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.995 [2024-11-18 14:24:29.993319] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.995 [2024-11-18 14:24:29.993483] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:37.995 [2024-11-18 14:24:29.993582] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:37.995 14:24:30 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.995 14:24:30 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:38.253 
14:24:30 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:38.253 14:24:30 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:38.253 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:38.253 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:38.253 14:24:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:38.512 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:38.512 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:38.512 14:24:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:38.771 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:38.771 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:38.771 14:24:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:39.030 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:39.030 14:24:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:39.030 14:24:30 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:39.030 14:24:30 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:39.030 14:24:30 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:39.289 [2024-11-18 14:24:31.106500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:39.289 [2024-11-18 14:24:31.106692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.289 [2024-11-18 14:24:31.106762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:39.289 [2024-11-18 14:24:31.107073] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.289 [2024-11-18 14:24:31.109008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.289 [2024-11-18 14:24:31.109187] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:39.289 [2024-11-18 14:24:31.109350] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:39.289 [2024-11-18 14:24:31.109488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:39.289 pt2 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:22:39.289 "name": "raid_bdev1", 00:22:39.289 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:39.289 "strip_size_kb": 64, 00:22:39.289 "state": "configuring", 00:22:39.289 "raid_level": "raid5f", 00:22:39.289 "superblock": true, 00:22:39.289 "num_base_bdevs": 4, 00:22:39.289 "num_base_bdevs_discovered": 1, 00:22:39.289 "num_base_bdevs_operational": 3, 00:22:39.289 "base_bdevs_list": [ 00:22:39.289 { 00:22:39.289 "name": null, 00:22:39.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.289 "is_configured": false, 00:22:39.289 "data_offset": 2048, 00:22:39.289 "data_size": 63488 00:22:39.289 }, 00:22:39.289 { 00:22:39.289 "name": "pt2", 00:22:39.289 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:39.289 "is_configured": true, 00:22:39.289 "data_offset": 2048, 00:22:39.289 "data_size": 63488 00:22:39.289 }, 00:22:39.289 { 00:22:39.289 "name": null, 00:22:39.289 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:39.289 "is_configured": false, 00:22:39.289 "data_offset": 2048, 00:22:39.289 "data_size": 63488 00:22:39.289 }, 00:22:39.289 { 00:22:39.289 "name": null, 00:22:39.289 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:39.289 "is_configured": false, 00:22:39.289 "data_offset": 2048, 00:22:39.289 "data_size": 63488 00:22:39.289 } 00:22:39.289 ] 00:22:39.289 }' 00:22:39.289 14:24:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.289 14:24:31 -- common/autotest_common.sh@10 -- # set +x 00:22:40.223 14:24:31 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:40.223 14:24:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:40.223 14:24:31 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:40.223 [2024-11-18 14:24:32.190701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:40.223 [2024-11-18 14:24:32.190893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:40.223 [2024-11-18 14:24:32.190962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:40.223 [2024-11-18 14:24:32.191215] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:40.223 [2024-11-18 14:24:32.191560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:40.223 [2024-11-18 14:24:32.191722] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:40.223 [2024-11-18 14:24:32.191883] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:40.223 [2024-11-18 14:24:32.192035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:40.223 pt3 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.223 14:24:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.224 14:24:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
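The locals being declared in this stretch of the trace belong to verify_raid_bdev_state, the assertion helper the suite runs after every reconfiguration: it pulls the raid bdev's JSON over the RPC socket (the bdev_raid_get_bdevs all | jq pipeline visible throughout this log) and compares the observed state, RAID level, strip size and base-bdev counts against the expected values passed in. Below is a minimal sketch of that check, assuming the rpc.py path and socket used everywhere in this log; the function body is a simplification for illustration, not a verbatim copy of the suite's helper, and it asserts only a subset of the fields the real one tracks:

    verify_raid_bdev_state() {
        # args: <raid_bdev_name> <expected_state> <raid_level> <strip_size_kb> <num_operational>
        local info
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\")")
        # every field checked here appears in the raid_bdev_info dumps printed in this log
        [[ $(jq -r '.state' <<< "$info") == "$2" ]] &&
            [[ $(jq -r '.raid_level' <<< "$info") == "$3" ]] &&
            [[ $(jq -r '.strip_size_kb' <<< "$info") == "$4" ]] &&
            [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == "$5" ]]
    }

A call such as verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3, as traced here, therefore passes only while the array is still waiting for its remaining base bdevs; once the deleted passthru bdevs are re-created and their superblocks re-examined, the expected state flips back to online.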
00:22:40.224 14:24:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.224 14:24:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.224 14:24:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.482 14:24:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.482 "name": "raid_bdev1", 00:22:40.482 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:40.482 "strip_size_kb": 64, 00:22:40.482 "state": "configuring", 00:22:40.482 "raid_level": "raid5f", 00:22:40.482 "superblock": true, 00:22:40.482 "num_base_bdevs": 4, 00:22:40.482 "num_base_bdevs_discovered": 2, 00:22:40.482 "num_base_bdevs_operational": 3, 00:22:40.482 "base_bdevs_list": [ 00:22:40.482 { 00:22:40.482 "name": null, 00:22:40.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.483 "is_configured": false, 00:22:40.483 "data_offset": 2048, 00:22:40.483 "data_size": 63488 00:22:40.483 }, 00:22:40.483 { 00:22:40.483 "name": "pt2", 00:22:40.483 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:40.483 "is_configured": true, 00:22:40.483 "data_offset": 2048, 00:22:40.483 "data_size": 63488 00:22:40.483 }, 00:22:40.483 { 00:22:40.483 "name": "pt3", 00:22:40.483 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:40.483 "is_configured": true, 00:22:40.483 "data_offset": 2048, 00:22:40.483 "data_size": 63488 00:22:40.483 }, 00:22:40.483 { 00:22:40.483 "name": null, 00:22:40.483 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:40.483 "is_configured": false, 00:22:40.483 "data_offset": 2048, 00:22:40.483 "data_size": 63488 00:22:40.483 } 00:22:40.483 ] 00:22:40.483 }' 00:22:40.483 14:24:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.483 14:24:32 -- common/autotest_common.sh@10 -- # set +x 00:22:41.050 14:24:33 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:41.050 14:24:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:41.050 14:24:33 -- bdev/bdev_raid.sh@462 -- # i=3 00:22:41.050 14:24:33 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:41.309 [2024-11-18 14:24:33.290972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:41.309 [2024-11-18 14:24:33.291212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.309 [2024-11-18 14:24:33.291432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:41.309 [2024-11-18 14:24:33.291583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.309 [2024-11-18 14:24:33.292152] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.309 [2024-11-18 14:24:33.292323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:41.309 [2024-11-18 14:24:33.292508] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:41.309 [2024-11-18 14:24:33.292639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:41.309 [2024-11-18 14:24:33.292822] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:22:41.309 [2024-11-18 14:24:33.292942] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:41.309 [2024-11-18 14:24:33.293136] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002c80 00:22:41.309 [2024-11-18 14:24:33.294007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:22:41.309 [2024-11-18 14:24:33.294143] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:22:41.309 [2024-11-18 14:24:33.294531] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.309 pt4 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.309 14:24:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.568 14:24:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:41.568 "name": "raid_bdev1", 00:22:41.568 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:41.568 "strip_size_kb": 64, 00:22:41.568 "state": "online", 00:22:41.568 "raid_level": "raid5f", 00:22:41.568 "superblock": true, 00:22:41.568 "num_base_bdevs": 4, 00:22:41.568 "num_base_bdevs_discovered": 3, 00:22:41.568 "num_base_bdevs_operational": 3, 00:22:41.568 "base_bdevs_list": [ 00:22:41.568 { 00:22:41.568 "name": null, 00:22:41.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.568 "is_configured": false, 00:22:41.568 "data_offset": 2048, 00:22:41.568 "data_size": 63488 00:22:41.568 }, 00:22:41.568 { 00:22:41.568 "name": "pt2", 00:22:41.568 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:41.568 "is_configured": true, 00:22:41.568 "data_offset": 2048, 00:22:41.568 "data_size": 63488 00:22:41.568 }, 00:22:41.568 { 00:22:41.568 "name": "pt3", 00:22:41.568 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:41.568 "is_configured": true, 00:22:41.568 "data_offset": 2048, 00:22:41.568 "data_size": 63488 00:22:41.568 }, 00:22:41.568 { 00:22:41.568 "name": "pt4", 00:22:41.568 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:41.568 "is_configured": true, 00:22:41.568 "data_offset": 2048, 00:22:41.568 "data_size": 63488 00:22:41.568 } 00:22:41.568 ] 00:22:41.568 }' 00:22:41.568 14:24:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:41.568 14:24:33 -- common/autotest_common.sh@10 -- # set +x 00:22:42.135 14:24:34 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:22:42.135 14:24:34 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:42.394 [2024-11-18 14:24:34.276843] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:42.394 [2024-11-18 14:24:34.276995] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:42.394 [2024-11-18 14:24:34.277184] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:42.394 [2024-11-18 14:24:34.277358] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:42.394 [2024-11-18 14:24:34.277472] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:22:42.394 14:24:34 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.394 14:24:34 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:42.653 14:24:34 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:42.653 14:24:34 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:42.653 14:24:34 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:42.653 [2024-11-18 14:24:34.712870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:42.653 [2024-11-18 14:24:34.713081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.653 [2024-11-18 14:24:34.713165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:42.653 [2024-11-18 14:24:34.713397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.653 [2024-11-18 14:24:34.715747] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.653 [2024-11-18 14:24:34.715950] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:42.653 [2024-11-18 14:24:34.716153] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:42.653 [2024-11-18 14:24:34.716305] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:42.653 pt1 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.912 "name": "raid_bdev1", 00:22:42.912 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:42.912 "strip_size_kb": 64, 00:22:42.912 "state": "configuring", 00:22:42.912 "raid_level": "raid5f", 00:22:42.912 "superblock": true, 00:22:42.912 "num_base_bdevs": 4, 00:22:42.912 "num_base_bdevs_discovered": 1, 00:22:42.912 "num_base_bdevs_operational": 4, 00:22:42.912 "base_bdevs_list": [ 00:22:42.912 { 00:22:42.912 "name": "pt1", 00:22:42.912 "uuid": "2d1a884a-119c-5305-8e81-85cc46969f2c", 00:22:42.912 "is_configured": true, 
00:22:42.912 "data_offset": 2048, 00:22:42.912 "data_size": 63488 00:22:42.912 }, 00:22:42.912 { 00:22:42.912 "name": null, 00:22:42.912 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:42.912 "is_configured": false, 00:22:42.912 "data_offset": 2048, 00:22:42.912 "data_size": 63488 00:22:42.912 }, 00:22:42.912 { 00:22:42.912 "name": null, 00:22:42.912 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:42.912 "is_configured": false, 00:22:42.912 "data_offset": 2048, 00:22:42.912 "data_size": 63488 00:22:42.912 }, 00:22:42.912 { 00:22:42.912 "name": null, 00:22:42.912 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:42.912 "is_configured": false, 00:22:42.912 "data_offset": 2048, 00:22:42.912 "data_size": 63488 00:22:42.912 } 00:22:42.912 ] 00:22:42.912 }' 00:22:42.912 14:24:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.912 14:24:34 -- common/autotest_common.sh@10 -- # set +x 00:22:43.848 14:24:35 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:43.848 14:24:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:43.848 14:24:35 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:43.848 14:24:35 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:43.848 14:24:35 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:43.848 14:24:35 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:44.107 14:24:36 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:44.107 14:24:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:44.107 14:24:36 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:44.367 14:24:36 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:44.367 14:24:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:44.367 14:24:36 -- bdev/bdev_raid.sh@489 -- # i=3 00:22:44.367 14:24:36 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:44.626 [2024-11-18 14:24:36.521249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:44.626 [2024-11-18 14:24:36.521473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.626 [2024-11-18 14:24:36.521619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:44.626 [2024-11-18 14:24:36.521743] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.626 [2024-11-18 14:24:36.522305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.626 [2024-11-18 14:24:36.522489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:44.626 [2024-11-18 14:24:36.522701] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:22:44.626 [2024-11-18 14:24:36.522815] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:44.626 [2024-11-18 14:24:36.522907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.626 [2024-11-18 14:24:36.522974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:22:44.626 [2024-11-18 14:24:36.523278] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:44.626 pt4 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.626 14:24:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.885 14:24:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.885 "name": "raid_bdev1", 00:22:44.885 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:44.885 "strip_size_kb": 64, 00:22:44.885 "state": "configuring", 00:22:44.885 "raid_level": "raid5f", 00:22:44.885 "superblock": true, 00:22:44.885 "num_base_bdevs": 4, 00:22:44.885 "num_base_bdevs_discovered": 1, 00:22:44.885 "num_base_bdevs_operational": 3, 00:22:44.885 "base_bdevs_list": [ 00:22:44.885 { 00:22:44.885 "name": null, 00:22:44.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.885 "is_configured": false, 00:22:44.885 "data_offset": 2048, 00:22:44.885 "data_size": 63488 00:22:44.885 }, 00:22:44.885 { 00:22:44.885 "name": null, 00:22:44.885 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:44.885 "is_configured": false, 00:22:44.885 "data_offset": 2048, 00:22:44.885 "data_size": 63488 00:22:44.885 }, 00:22:44.885 { 00:22:44.885 "name": null, 00:22:44.885 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:44.885 "is_configured": false, 00:22:44.885 "data_offset": 2048, 00:22:44.885 "data_size": 63488 00:22:44.885 }, 00:22:44.885 { 00:22:44.885 "name": "pt4", 00:22:44.885 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:44.885 "is_configured": true, 00:22:44.885 "data_offset": 2048, 00:22:44.885 "data_size": 63488 00:22:44.885 } 00:22:44.885 ] 00:22:44.885 }' 00:22:44.885 14:24:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.885 14:24:36 -- common/autotest_common.sh@10 -- # set +x 00:22:45.451 14:24:37 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:45.451 14:24:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:45.451 14:24:37 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:45.708 [2024-11-18 14:24:37.571663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:45.708 [2024-11-18 14:24:37.571889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.708 [2024-11-18 14:24:37.572029] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:45.708 [2024-11-18 14:24:37.572168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.708 [2024-11-18 14:24:37.572740] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.708 [2024-11-18 14:24:37.572918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:45.708 [2024-11-18 14:24:37.573114] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:45.708 [2024-11-18 14:24:37.573251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.708 pt2 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:45.708 [2024-11-18 14:24:37.755663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:45.708 [2024-11-18 14:24:37.755847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.708 [2024-11-18 14:24:37.755921] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:45.708 [2024-11-18 14:24:37.756216] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.708 [2024-11-18 14:24:37.756672] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.708 [2024-11-18 14:24:37.756851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:45.708 [2024-11-18 14:24:37.757029] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:45.708 [2024-11-18 14:24:37.757146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:45.708 [2024-11-18 14:24:37.757305] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:45.708 [2024-11-18 14:24:37.757396] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:45.708 [2024-11-18 14:24:37.757638] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:22:45.708 [2024-11-18 14:24:37.758525] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:45.708 [2024-11-18 14:24:37.758652] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:45.708 [2024-11-18 14:24:37.758945] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.708 pt3 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.708 14:24:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.967 14:24:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.967 "name": "raid_bdev1", 00:22:45.967 "uuid": "0c30d40e-3efc-4598-8bda-92da058480ac", 00:22:45.967 "strip_size_kb": 64, 00:22:45.967 "state": "online", 00:22:45.967 "raid_level": "raid5f", 00:22:45.967 "superblock": true, 00:22:45.967 "num_base_bdevs": 4, 00:22:45.967 "num_base_bdevs_discovered": 3, 00:22:45.967 "num_base_bdevs_operational": 3, 00:22:45.967 "base_bdevs_list": [ 00:22:45.967 { 00:22:45.967 "name": null, 00:22:45.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.967 "is_configured": false, 00:22:45.967 "data_offset": 2048, 00:22:45.967 "data_size": 63488 00:22:45.967 }, 00:22:45.967 { 00:22:45.967 "name": "pt2", 00:22:45.967 "uuid": "7980f470-d918-56ef-b074-b20775943f84", 00:22:45.967 "is_configured": true, 00:22:45.967 "data_offset": 2048, 00:22:45.967 "data_size": 63488 00:22:45.967 }, 00:22:45.967 { 00:22:45.967 "name": "pt3", 00:22:45.967 "uuid": "75b17d52-e3e1-525c-a194-773248ac821a", 00:22:45.967 "is_configured": true, 00:22:45.967 "data_offset": 2048, 00:22:45.967 "data_size": 63488 00:22:45.967 }, 00:22:45.967 { 00:22:45.967 "name": "pt4", 00:22:45.967 "uuid": "bd0aa0ee-36a8-5f59-b094-5a9eda5bfb06", 00:22:45.967 "is_configured": true, 00:22:45.967 "data_offset": 2048, 00:22:45.967 "data_size": 63488 00:22:45.967 } 00:22:45.967 ] 00:22:45.967 }' 00:22:45.967 14:24:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.967 14:24:37 -- common/autotest_common.sh@10 -- # set +x 00:22:46.567 14:24:38 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:46.567 14:24:38 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:46.833 [2024-11-18 14:24:38.725710] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:46.833 14:24:38 -- bdev/bdev_raid.sh@506 -- # '[' 0c30d40e-3efc-4598-8bda-92da058480ac '!=' 0c30d40e-3efc-4598-8bda-92da058480ac ']' 00:22:46.833 14:24:38 -- bdev/bdev_raid.sh@511 -- # killprocess 140136 00:22:46.833 14:24:38 -- common/autotest_common.sh@936 -- # '[' -z 140136 ']' 00:22:46.833 14:24:38 -- common/autotest_common.sh@940 -- # kill -0 140136 00:22:46.833 14:24:38 -- common/autotest_common.sh@941 -- # uname 00:22:46.833 14:24:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.833 14:24:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140136 00:22:46.833 killing process with pid 140136 00:22:46.833 14:24:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:46.833 14:24:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:46.833 14:24:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140136' 00:22:46.833 14:24:38 -- common/autotest_common.sh@955 -- # kill 140136 00:22:46.833 14:24:38 -- common/autotest_common.sh@960 -- # wait 140136 00:22:46.833 [2024-11-18 14:24:38.758335] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:46.833 [2024-11-18 14:24:38.758404] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.833 [2024-11-18 14:24:38.758479] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.833 [2024-11-18 14:24:38.758504] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:22:46.833 [2024-11-18 14:24:38.809490] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:47.091 ************************************ 00:22:47.091 END TEST raid5f_superblock_test 00:22:47.092 ************************************ 00:22:47.092 14:24:39 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:47.092 00:22:47.092 real 0m20.034s 00:22:47.092 user 0m37.550s 00:22:47.092 sys 0m2.413s 00:22:47.092 14:24:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:47.092 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.092 14:24:39 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:22:47.092 14:24:39 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:22:47.092 14:24:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:47.092 14:24:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:47.092 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.092 ************************************ 00:22:47.092 START TEST raid5f_rebuild_test 00:22:47.353 ************************************ 00:22:47.353 14:24:39 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=140785 
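[annotation] The trace above stores the bdevperf PID in raid_pid and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk-raid.sock answers. A minimal sketch of that start-and-wait pattern, assuming a simple poll loop (the retry count and rpc_get_methods probe are illustrative assumptions, not the autotest_common.sh implementation; the bdevperf flags are copied from the trace):

    # Start bdevperf with a private RPC socket, then poll until it responds.
    sock=/var/tmp/spdk-raid.sock
    ./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds only once the app is listening on $sock
        if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done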
00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 140785 /var/tmp/spdk-raid.sock 00:22:47.353 14:24:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:47.353 14:24:39 -- common/autotest_common.sh@829 -- # '[' -z 140785 ']' 00:22:47.353 14:24:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:47.353 14:24:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.353 14:24:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:47.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:47.354 14:24:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.354 14:24:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.354 [2024-11-18 14:24:39.238082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:47.354 [2024-11-18 14:24:39.238554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140785 ] 00:22:47.354 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:47.354 Zero copy mechanism will not be used. 00:22:47.354 [2024-11-18 14:24:39.388468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.614 [2024-11-18 14:24:39.472763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.614 [2024-11-18 14:24:39.555038] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.181 14:24:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.181 14:24:40 -- common/autotest_common.sh@862 -- # return 0 00:22:48.181 14:24:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:48.181 14:24:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:48.181 14:24:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:48.439 BaseBdev1 00:22:48.439 14:24:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:48.439 14:24:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:48.439 14:24:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:48.696 BaseBdev2 00:22:48.696 14:24:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:48.696 14:24:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:48.696 14:24:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:48.953 BaseBdev3 00:22:48.953 14:24:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:48.953 14:24:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:48.953 14:24:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:49.212 BaseBdev4 00:22:49.212 14:24:41 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:49.212 spare_malloc 00:22:49.212 14:24:41 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:49.469 spare_delay 00:22:49.469 14:24:41 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:49.726 [2024-11-18 14:24:41.623918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:49.726 [2024-11-18 14:24:41.624274] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.726 [2024-11-18 14:24:41.624351] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:49.726 [2024-11-18 14:24:41.624674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.726 [2024-11-18 14:24:41.627261] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.726 [2024-11-18 14:24:41.627428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:49.726 spare 00:22:49.726 14:24:41 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:49.983 [2024-11-18 14:24:41.860110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.983 [2024-11-18 14:24:41.862237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.983 [2024-11-18 14:24:41.862409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:49.984 [2024-11-18 14:24:41.862491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:49.984 [2024-11-18 14:24:41.862696] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:22:49.984 [2024-11-18 14:24:41.862836] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:49.984 [2024-11-18 14:24:41.863009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:22:49.984 [2024-11-18 14:24:41.863903] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:22:49.984 [2024-11-18 14:24:41.864035] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:22:49.984 [2024-11-18 14:24:41.864342] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
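[annotation] The RPCs traced above assemble the device stack for the rebuild test: four malloc base bdevs, plus a "spare" built as passthru-over-delay-over-malloc, then the raid5f array itself. A condensed replay of those calls (all commands and arguments appear verbatim in the trace; the delay bdev's -w/-n 100000 presumably add ~100 ms write latency so the later rebuild runs slowly enough to observe):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Four 32 MiB / 512 B-block base bdevs
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done
    # Spare: malloc -> delay (slow writes) -> passthru named "spare"
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    # raid5f array with 64 KiB strip over the four base bdevs
    $rpc bdev_raid_create -z 64 -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1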
00:22:49.984 14:24:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.984 14:24:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.984 "name": "raid_bdev1", 00:22:49.984 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:49.984 "strip_size_kb": 64, 00:22:49.984 "state": "online", 00:22:49.984 "raid_level": "raid5f", 00:22:49.984 "superblock": false, 00:22:49.984 "num_base_bdevs": 4, 00:22:49.984 "num_base_bdevs_discovered": 4, 00:22:49.984 "num_base_bdevs_operational": 4, 00:22:49.984 "base_bdevs_list": [ 00:22:49.984 { 00:22:49.984 "name": "BaseBdev1", 00:22:49.984 "uuid": "a771376e-f983-42c9-9e30-ce68b7bdfb1e", 00:22:49.984 "is_configured": true, 00:22:49.984 "data_offset": 0, 00:22:49.984 "data_size": 65536 00:22:49.984 }, 00:22:49.984 { 00:22:49.984 "name": "BaseBdev2", 00:22:49.984 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:49.984 "is_configured": true, 00:22:49.984 "data_offset": 0, 00:22:49.984 "data_size": 65536 00:22:49.984 }, 00:22:49.984 { 00:22:49.984 "name": "BaseBdev3", 00:22:49.984 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:49.984 "is_configured": true, 00:22:49.984 "data_offset": 0, 00:22:49.984 "data_size": 65536 00:22:49.984 }, 00:22:49.984 { 00:22:49.984 "name": "BaseBdev4", 00:22:49.984 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:49.984 "is_configured": true, 00:22:49.984 "data_offset": 0, 00:22:49.984 "data_size": 65536 00:22:49.984 } 00:22:49.984 ] 00:22:49.984 }' 00:22:49.984 14:24:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.984 14:24:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.917 14:24:42 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:50.918 14:24:42 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:50.918 [2024-11-18 14:24:42.796544] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.918 14:24:42 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:22:50.918 14:24:42 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.918 14:24:42 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:51.176 14:24:42 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:51.176 14:24:42 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:51.176 14:24:42 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:51.176 14:24:43 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@12 -- # local i 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:51.176 14:24:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:51.177 14:24:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:51.177 [2024-11-18 14:24:43.244557] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:51.435 /dev/nbd0 00:22:51.435 14:24:43 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:22:51.435 14:24:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:51.435 14:24:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:51.435 14:24:43 -- common/autotest_common.sh@867 -- # local i 00:22:51.435 14:24:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:51.435 14:24:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:51.435 14:24:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:51.435 14:24:43 -- common/autotest_common.sh@871 -- # break 00:22:51.435 14:24:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:51.435 14:24:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:51.435 14:24:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:51.435 1+0 records in 00:22:51.435 1+0 records out 00:22:51.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039652 s, 10.3 MB/s 00:22:51.435 14:24:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:51.435 14:24:43 -- common/autotest_common.sh@884 -- # size=4096 00:22:51.435 14:24:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:51.435 14:24:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:51.435 14:24:43 -- common/autotest_common.sh@887 -- # return 0 00:22:51.435 14:24:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:51.435 14:24:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:51.435 14:24:43 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:51.435 14:24:43 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:22:51.435 14:24:43 -- bdev/bdev_raid.sh@582 -- # echo 192 00:22:51.435 14:24:43 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:22:52.002 512+0 records in 00:22:52.002 512+0 records out 00:22:52.002 100663296 bytes (101 MB, 96 MiB) copied, 0.5004 s, 201 MB/s 00:22:52.002 14:24:43 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:52.002 14:24:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:52.002 14:24:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:52.002 14:24:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.002 14:24:43 -- bdev/nbd_common.sh@51 -- # local i 00:22:52.002 14:24:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.002 14:24:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.002 [2024-11-18 14:24:44.019759] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@41 -- # break 00:22:52.002 14:24:44 -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.002 14:24:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:52.260 [2024-11-18 14:24:44.199416] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
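[annotation] The dd parameters above are full-stripe writes, which is why write_unit_size is 384 and the script echoes 192: raid5f over 4 disks keeps 3 data strips plus 1 parity per stripe, a 64 KiB strip is 128 blocks of 512 B, so a full stripe is 3 x 128 = 384 blocks = 196608 B = 192 KiB, and 512 such writes give the 100663296 B (96 MiB) dd reports. A sketch of the same exercise, using only RPCs from the trace:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk raid_bdev1 /dev/nbd0
    # bs=196608 = one full raid5f stripe (3 data strips x 64 KiB)
    dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct
    $rpc nbd_stop_disk /dev/nbd0
    # Degrade the array to 3 of 4 members before rebuilding onto the spare
    $rpc bdev_raid_remove_base_bdev BaseBdev1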
00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.260 14:24:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.519 14:24:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.519 "name": "raid_bdev1", 00:22:52.519 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:52.519 "strip_size_kb": 64, 00:22:52.519 "state": "online", 00:22:52.519 "raid_level": "raid5f", 00:22:52.519 "superblock": false, 00:22:52.519 "num_base_bdevs": 4, 00:22:52.519 "num_base_bdevs_discovered": 3, 00:22:52.519 "num_base_bdevs_operational": 3, 00:22:52.519 "base_bdevs_list": [ 00:22:52.519 { 00:22:52.519 "name": null, 00:22:52.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.519 "is_configured": false, 00:22:52.519 "data_offset": 0, 00:22:52.519 "data_size": 65536 00:22:52.519 }, 00:22:52.519 { 00:22:52.519 "name": "BaseBdev2", 00:22:52.519 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:52.519 "is_configured": true, 00:22:52.519 "data_offset": 0, 00:22:52.519 "data_size": 65536 00:22:52.519 }, 00:22:52.519 { 00:22:52.519 "name": "BaseBdev3", 00:22:52.519 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:52.519 "is_configured": true, 00:22:52.519 "data_offset": 0, 00:22:52.519 "data_size": 65536 00:22:52.519 }, 00:22:52.519 { 00:22:52.519 "name": "BaseBdev4", 00:22:52.519 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:52.519 "is_configured": true, 00:22:52.519 "data_offset": 0, 00:22:52.519 "data_size": 65536 00:22:52.519 } 00:22:52.519 ] 00:22:52.519 }' 00:22:52.519 14:24:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.519 14:24:44 -- common/autotest_common.sh@10 -- # set +x 00:22:53.086 14:24:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.344 [2024-11-18 14:24:45.311626] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:53.344 [2024-11-18 14:24:45.311809] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.344 [2024-11-18 14:24:45.317135] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:22:53.344 [2024-11-18 14:24:45.319657] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.344 14:24:45 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.279 14:24:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.538 14:24:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.538 "name": "raid_bdev1", 00:22:54.538 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:54.538 "strip_size_kb": 64, 00:22:54.538 "state": "online", 00:22:54.538 "raid_level": "raid5f", 00:22:54.538 "superblock": false, 00:22:54.538 "num_base_bdevs": 4, 00:22:54.538 "num_base_bdevs_discovered": 4, 00:22:54.538 "num_base_bdevs_operational": 4, 00:22:54.538 "process": { 00:22:54.539 "type": "rebuild", 00:22:54.539 "target": "spare", 00:22:54.539 "progress": { 00:22:54.539 "blocks": 23040, 00:22:54.539 "percent": 11 00:22:54.539 } 00:22:54.539 }, 00:22:54.539 "base_bdevs_list": [ 00:22:54.539 { 00:22:54.539 "name": "spare", 00:22:54.539 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:22:54.539 "is_configured": true, 00:22:54.539 "data_offset": 0, 00:22:54.539 "data_size": 65536 00:22:54.539 }, 00:22:54.539 { 00:22:54.539 "name": "BaseBdev2", 00:22:54.539 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:54.539 "is_configured": true, 00:22:54.539 "data_offset": 0, 00:22:54.539 "data_size": 65536 00:22:54.539 }, 00:22:54.539 { 00:22:54.539 "name": "BaseBdev3", 00:22:54.539 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:54.539 "is_configured": true, 00:22:54.539 "data_offset": 0, 00:22:54.539 "data_size": 65536 00:22:54.539 }, 00:22:54.539 { 00:22:54.539 "name": "BaseBdev4", 00:22:54.539 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:54.539 "is_configured": true, 00:22:54.539 "data_offset": 0, 00:22:54.539 "data_size": 65536 00:22:54.539 } 00:22:54.539 ] 00:22:54.539 }' 00:22:54.539 14:24:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.539 14:24:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.539 14:24:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:54.797 [2024-11-18 14:24:46.805652] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.797 [2024-11-18 14:24:46.829841] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.797 [2024-11-18 14:24:46.830048] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@124 -- # local 
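[annotation] After bdev_raid_add_base_bdev raid_bdev1 spare kicks off the rebuild, verify_raid_bdev_process repeatedly pulls the array's JSON from bdev_raid_get_bdevs and checks .process.type and .process.target, as traced above. A sketch of that polling loop, assuming we watch progress until the process field disappears (the jq paths come from the trace; the loop shape is an illustrative assumption, not bdev_raid.sh verbatim):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    while :; do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        # .process is absent once the rebuild finishes, so // "none" ends the loop
        [ "$(jq -r '.process.type // "none"' <<< "$info")" = rebuild ] || break
        [ "$(jq -r '.process.target // "none"' <<< "$info")" = spare ] || break
        jq -r '.process.progress.percent' <<< "$info"   # e.g. 11, 13, 27, ...
        sleep 1
    done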
num_base_bdevs_discovered 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.797 14:24:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.057 14:24:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.057 "name": "raid_bdev1", 00:22:55.057 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:55.057 "strip_size_kb": 64, 00:22:55.057 "state": "online", 00:22:55.057 "raid_level": "raid5f", 00:22:55.057 "superblock": false, 00:22:55.057 "num_base_bdevs": 4, 00:22:55.057 "num_base_bdevs_discovered": 3, 00:22:55.057 "num_base_bdevs_operational": 3, 00:22:55.057 "base_bdevs_list": [ 00:22:55.057 { 00:22:55.057 "name": null, 00:22:55.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.057 "is_configured": false, 00:22:55.057 "data_offset": 0, 00:22:55.057 "data_size": 65536 00:22:55.057 }, 00:22:55.057 { 00:22:55.057 "name": "BaseBdev2", 00:22:55.057 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:55.057 "is_configured": true, 00:22:55.057 "data_offset": 0, 00:22:55.057 "data_size": 65536 00:22:55.057 }, 00:22:55.057 { 00:22:55.057 "name": "BaseBdev3", 00:22:55.057 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:55.057 "is_configured": true, 00:22:55.057 "data_offset": 0, 00:22:55.057 "data_size": 65536 00:22:55.057 }, 00:22:55.057 { 00:22:55.057 "name": "BaseBdev4", 00:22:55.057 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:55.057 "is_configured": true, 00:22:55.057 "data_offset": 0, 00:22:55.057 "data_size": 65536 00:22:55.057 } 00:22:55.057 ] 00:22:55.057 }' 00:22:55.057 14:24:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.057 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.625 14:24:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.884 14:24:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:55.884 "name": "raid_bdev1", 00:22:55.884 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:55.884 "strip_size_kb": 64, 00:22:55.884 "state": "online", 00:22:55.884 "raid_level": "raid5f", 00:22:55.884 "superblock": false, 00:22:55.884 "num_base_bdevs": 4, 00:22:55.884 "num_base_bdevs_discovered": 3, 00:22:55.884 "num_base_bdevs_operational": 3, 00:22:55.884 "base_bdevs_list": [ 00:22:55.884 { 00:22:55.884 "name": null, 00:22:55.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.884 "is_configured": false, 00:22:55.884 "data_offset": 0, 00:22:55.884 "data_size": 65536 00:22:55.884 }, 00:22:55.884 { 00:22:55.884 "name": "BaseBdev2", 00:22:55.884 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:55.884 "is_configured": true, 00:22:55.884 "data_offset": 0, 00:22:55.884 "data_size": 65536 00:22:55.884 }, 00:22:55.884 { 00:22:55.884 "name": "BaseBdev3", 00:22:55.884 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:55.884 "is_configured": true, 
00:22:55.884 "data_offset": 0, 00:22:55.884 "data_size": 65536 00:22:55.884 }, 00:22:55.884 { 00:22:55.884 "name": "BaseBdev4", 00:22:55.884 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:55.884 "is_configured": true, 00:22:55.884 "data_offset": 0, 00:22:55.884 "data_size": 65536 00:22:55.884 } 00:22:55.884 ] 00:22:55.884 }' 00:22:55.884 14:24:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:55.884 14:24:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:55.884 14:24:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.143 14:24:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:56.143 14:24:47 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.143 [2024-11-18 14:24:48.161606] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:56.143 [2024-11-18 14:24:48.161789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.143 [2024-11-18 14:24:48.164844] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:22:56.143 [2024-11-18 14:24:48.166947] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.143 14:24:48 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.519 "name": "raid_bdev1", 00:22:57.519 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:57.519 "strip_size_kb": 64, 00:22:57.519 "state": "online", 00:22:57.519 "raid_level": "raid5f", 00:22:57.519 "superblock": false, 00:22:57.519 "num_base_bdevs": 4, 00:22:57.519 "num_base_bdevs_discovered": 4, 00:22:57.519 "num_base_bdevs_operational": 4, 00:22:57.519 "process": { 00:22:57.519 "type": "rebuild", 00:22:57.519 "target": "spare", 00:22:57.519 "progress": { 00:22:57.519 "blocks": 21120, 00:22:57.519 "percent": 10 00:22:57.519 } 00:22:57.519 }, 00:22:57.519 "base_bdevs_list": [ 00:22:57.519 { 00:22:57.519 "name": "spare", 00:22:57.519 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:22:57.519 "is_configured": true, 00:22:57.519 "data_offset": 0, 00:22:57.519 "data_size": 65536 00:22:57.519 }, 00:22:57.519 { 00:22:57.519 "name": "BaseBdev2", 00:22:57.519 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:57.519 "is_configured": true, 00:22:57.519 "data_offset": 0, 00:22:57.519 "data_size": 65536 00:22:57.519 }, 00:22:57.519 { 00:22:57.519 "name": "BaseBdev3", 00:22:57.519 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:57.519 "is_configured": true, 00:22:57.519 "data_offset": 0, 00:22:57.519 "data_size": 65536 00:22:57.519 }, 00:22:57.519 { 00:22:57.519 "name": "BaseBdev4", 00:22:57.519 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:57.519 "is_configured": true, 00:22:57.519 "data_offset": 0, 
00:22:57.519 "data_size": 65536 00:22:57.519 } 00:22:57.519 ] 00:22:57.519 }' 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@657 -- # local timeout=645 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.519 14:24:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.778 14:24:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.778 "name": "raid_bdev1", 00:22:57.778 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:57.778 "strip_size_kb": 64, 00:22:57.778 "state": "online", 00:22:57.778 "raid_level": "raid5f", 00:22:57.778 "superblock": false, 00:22:57.778 "num_base_bdevs": 4, 00:22:57.778 "num_base_bdevs_discovered": 4, 00:22:57.778 "num_base_bdevs_operational": 4, 00:22:57.778 "process": { 00:22:57.778 "type": "rebuild", 00:22:57.778 "target": "spare", 00:22:57.778 "progress": { 00:22:57.778 "blocks": 26880, 00:22:57.778 "percent": 13 00:22:57.778 } 00:22:57.778 }, 00:22:57.778 "base_bdevs_list": [ 00:22:57.778 { 00:22:57.778 "name": "spare", 00:22:57.778 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:22:57.778 "is_configured": true, 00:22:57.778 "data_offset": 0, 00:22:57.778 "data_size": 65536 00:22:57.778 }, 00:22:57.778 { 00:22:57.778 "name": "BaseBdev2", 00:22:57.778 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:57.778 "is_configured": true, 00:22:57.778 "data_offset": 0, 00:22:57.778 "data_size": 65536 00:22:57.778 }, 00:22:57.778 { 00:22:57.778 "name": "BaseBdev3", 00:22:57.778 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:57.778 "is_configured": true, 00:22:57.778 "data_offset": 0, 00:22:57.778 "data_size": 65536 00:22:57.778 }, 00:22:57.778 { 00:22:57.778 "name": "BaseBdev4", 00:22:57.778 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:57.778 "is_configured": true, 00:22:57.778 "data_offset": 0, 00:22:57.778 "data_size": 65536 00:22:57.778 } 00:22:57.778 ] 00:22:57.778 }' 00:22:57.778 14:24:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.778 14:24:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.778 14:24:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.778 14:24:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.778 14:24:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.714 14:24:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.971 14:24:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.971 "name": "raid_bdev1", 00:22:58.971 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:22:58.971 "strip_size_kb": 64, 00:22:58.971 "state": "online", 00:22:58.971 "raid_level": "raid5f", 00:22:58.971 "superblock": false, 00:22:58.971 "num_base_bdevs": 4, 00:22:58.971 "num_base_bdevs_discovered": 4, 00:22:58.971 "num_base_bdevs_operational": 4, 00:22:58.971 "process": { 00:22:58.971 "type": "rebuild", 00:22:58.971 "target": "spare", 00:22:58.971 "progress": { 00:22:58.971 "blocks": 53760, 00:22:58.971 "percent": 27 00:22:58.971 } 00:22:58.971 }, 00:22:58.971 "base_bdevs_list": [ 00:22:58.971 { 00:22:58.971 "name": "spare", 00:22:58.971 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:22:58.971 "is_configured": true, 00:22:58.971 "data_offset": 0, 00:22:58.971 "data_size": 65536 00:22:58.971 }, 00:22:58.971 { 00:22:58.971 "name": "BaseBdev2", 00:22:58.971 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:22:58.971 "is_configured": true, 00:22:58.971 "data_offset": 0, 00:22:58.972 "data_size": 65536 00:22:58.972 }, 00:22:58.972 { 00:22:58.972 "name": "BaseBdev3", 00:22:58.972 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:22:58.972 "is_configured": true, 00:22:58.972 "data_offset": 0, 00:22:58.972 "data_size": 65536 00:22:58.972 }, 00:22:58.972 { 00:22:58.972 "name": "BaseBdev4", 00:22:58.972 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:22:58.972 "is_configured": true, 00:22:58.972 "data_offset": 0, 00:22:58.972 "data_size": 65536 00:22:58.972 } 00:22:58.972 ] 00:22:58.972 }' 00:22:58.972 14:24:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.230 14:24:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.230 14:24:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:59.230 14:24:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.230 14:24:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.165 14:24:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.424 14:24:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:00.424 "name": "raid_bdev1", 00:23:00.424 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:00.424 "strip_size_kb": 64, 00:23:00.424 "state": "online", 
00:23:00.424 "raid_level": "raid5f", 00:23:00.424 "superblock": false, 00:23:00.424 "num_base_bdevs": 4, 00:23:00.424 "num_base_bdevs_discovered": 4, 00:23:00.424 "num_base_bdevs_operational": 4, 00:23:00.424 "process": { 00:23:00.424 "type": "rebuild", 00:23:00.424 "target": "spare", 00:23:00.424 "progress": { 00:23:00.424 "blocks": 78720, 00:23:00.424 "percent": 40 00:23:00.424 } 00:23:00.424 }, 00:23:00.424 "base_bdevs_list": [ 00:23:00.424 { 00:23:00.424 "name": "spare", 00:23:00.424 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:00.424 "is_configured": true, 00:23:00.424 "data_offset": 0, 00:23:00.424 "data_size": 65536 00:23:00.424 }, 00:23:00.424 { 00:23:00.424 "name": "BaseBdev2", 00:23:00.424 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:00.424 "is_configured": true, 00:23:00.424 "data_offset": 0, 00:23:00.424 "data_size": 65536 00:23:00.424 }, 00:23:00.424 { 00:23:00.424 "name": "BaseBdev3", 00:23:00.424 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:00.424 "is_configured": true, 00:23:00.424 "data_offset": 0, 00:23:00.424 "data_size": 65536 00:23:00.424 }, 00:23:00.424 { 00:23:00.424 "name": "BaseBdev4", 00:23:00.424 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:00.424 "is_configured": true, 00:23:00.424 "data_offset": 0, 00:23:00.424 "data_size": 65536 00:23:00.424 } 00:23:00.424 ] 00:23:00.424 }' 00:23:00.424 14:24:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:00.424 14:24:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:00.424 14:24:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:00.424 14:24:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.424 14:24:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.361 14:24:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.619 14:24:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.619 "name": "raid_bdev1", 00:23:01.619 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:01.619 "strip_size_kb": 64, 00:23:01.619 "state": "online", 00:23:01.619 "raid_level": "raid5f", 00:23:01.619 "superblock": false, 00:23:01.619 "num_base_bdevs": 4, 00:23:01.619 "num_base_bdevs_discovered": 4, 00:23:01.619 "num_base_bdevs_operational": 4, 00:23:01.619 "process": { 00:23:01.619 "type": "rebuild", 00:23:01.619 "target": "spare", 00:23:01.619 "progress": { 00:23:01.619 "blocks": 103680, 00:23:01.619 "percent": 52 00:23:01.619 } 00:23:01.619 }, 00:23:01.619 "base_bdevs_list": [ 00:23:01.619 { 00:23:01.619 "name": "spare", 00:23:01.619 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:01.619 "is_configured": true, 00:23:01.619 "data_offset": 0, 00:23:01.619 "data_size": 65536 00:23:01.620 }, 00:23:01.620 { 00:23:01.620 "name": "BaseBdev2", 00:23:01.620 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:01.620 "is_configured": true, 00:23:01.620 "data_offset": 0, 
00:23:01.620 "data_size": 65536 00:23:01.620 }, 00:23:01.620 { 00:23:01.620 "name": "BaseBdev3", 00:23:01.620 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:01.620 "is_configured": true, 00:23:01.620 "data_offset": 0, 00:23:01.620 "data_size": 65536 00:23:01.620 }, 00:23:01.620 { 00:23:01.620 "name": "BaseBdev4", 00:23:01.620 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:01.620 "is_configured": true, 00:23:01.620 "data_offset": 0, 00:23:01.620 "data_size": 65536 00:23:01.620 } 00:23:01.620 ] 00:23:01.620 }' 00:23:01.620 14:24:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:01.620 14:24:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.620 14:24:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:01.878 14:24:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.878 14:24:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.814 14:24:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.073 14:24:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:03.073 "name": "raid_bdev1", 00:23:03.073 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:03.073 "strip_size_kb": 64, 00:23:03.073 "state": "online", 00:23:03.073 "raid_level": "raid5f", 00:23:03.073 "superblock": false, 00:23:03.073 "num_base_bdevs": 4, 00:23:03.073 "num_base_bdevs_discovered": 4, 00:23:03.073 "num_base_bdevs_operational": 4, 00:23:03.073 "process": { 00:23:03.073 "type": "rebuild", 00:23:03.073 "target": "spare", 00:23:03.073 "progress": { 00:23:03.073 "blocks": 126720, 00:23:03.073 "percent": 64 00:23:03.073 } 00:23:03.073 }, 00:23:03.073 "base_bdevs_list": [ 00:23:03.073 { 00:23:03.073 "name": "spare", 00:23:03.073 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:03.073 "is_configured": true, 00:23:03.073 "data_offset": 0, 00:23:03.073 "data_size": 65536 00:23:03.073 }, 00:23:03.073 { 00:23:03.073 "name": "BaseBdev2", 00:23:03.073 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:03.073 "is_configured": true, 00:23:03.073 "data_offset": 0, 00:23:03.073 "data_size": 65536 00:23:03.073 }, 00:23:03.073 { 00:23:03.073 "name": "BaseBdev3", 00:23:03.073 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:03.073 "is_configured": true, 00:23:03.073 "data_offset": 0, 00:23:03.073 "data_size": 65536 00:23:03.073 }, 00:23:03.073 { 00:23:03.073 "name": "BaseBdev4", 00:23:03.073 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:03.073 "is_configured": true, 00:23:03.073 "data_offset": 0, 00:23:03.073 "data_size": 65536 00:23:03.073 } 00:23:03.073 ] 00:23:03.073 }' 00:23:03.073 14:24:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:03.073 14:24:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.073 14:24:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:03.073 14:24:55 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:23:03.073 14:24:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.009 14:24:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.268 14:24:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:04.268 "name": "raid_bdev1", 00:23:04.268 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:04.268 "strip_size_kb": 64, 00:23:04.268 "state": "online", 00:23:04.268 "raid_level": "raid5f", 00:23:04.268 "superblock": false, 00:23:04.268 "num_base_bdevs": 4, 00:23:04.268 "num_base_bdevs_discovered": 4, 00:23:04.268 "num_base_bdevs_operational": 4, 00:23:04.268 "process": { 00:23:04.268 "type": "rebuild", 00:23:04.268 "target": "spare", 00:23:04.268 "progress": { 00:23:04.268 "blocks": 153600, 00:23:04.268 "percent": 78 00:23:04.268 } 00:23:04.268 }, 00:23:04.268 "base_bdevs_list": [ 00:23:04.268 { 00:23:04.268 "name": "spare", 00:23:04.268 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:04.268 "is_configured": true, 00:23:04.268 "data_offset": 0, 00:23:04.268 "data_size": 65536 00:23:04.268 }, 00:23:04.268 { 00:23:04.268 "name": "BaseBdev2", 00:23:04.268 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:04.268 "is_configured": true, 00:23:04.268 "data_offset": 0, 00:23:04.268 "data_size": 65536 00:23:04.268 }, 00:23:04.268 { 00:23:04.268 "name": "BaseBdev3", 00:23:04.268 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:04.268 "is_configured": true, 00:23:04.268 "data_offset": 0, 00:23:04.268 "data_size": 65536 00:23:04.268 }, 00:23:04.268 { 00:23:04.268 "name": "BaseBdev4", 00:23:04.268 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:04.268 "is_configured": true, 00:23:04.268 "data_offset": 0, 00:23:04.268 "data_size": 65536 00:23:04.268 } 00:23:04.268 ] 00:23:04.268 }' 00:23:04.268 14:24:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:04.268 14:24:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.268 14:24:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:04.527 14:24:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.527 14:24:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.463 14:24:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.722 14:24:57 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:05.722 "name": "raid_bdev1", 00:23:05.722 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:05.722 "strip_size_kb": 64, 00:23:05.722 "state": "online", 00:23:05.722 "raid_level": "raid5f", 00:23:05.722 "superblock": false, 00:23:05.722 "num_base_bdevs": 4, 00:23:05.722 "num_base_bdevs_discovered": 4, 00:23:05.722 "num_base_bdevs_operational": 4, 00:23:05.722 "process": { 00:23:05.722 "type": "rebuild", 00:23:05.722 "target": "spare", 00:23:05.722 "progress": { 00:23:05.722 "blocks": 178560, 00:23:05.722 "percent": 90 00:23:05.722 } 00:23:05.722 }, 00:23:05.722 "base_bdevs_list": [ 00:23:05.722 { 00:23:05.722 "name": "spare", 00:23:05.722 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:05.722 "is_configured": true, 00:23:05.722 "data_offset": 0, 00:23:05.722 "data_size": 65536 00:23:05.722 }, 00:23:05.722 { 00:23:05.722 "name": "BaseBdev2", 00:23:05.722 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:05.722 "is_configured": true, 00:23:05.722 "data_offset": 0, 00:23:05.722 "data_size": 65536 00:23:05.722 }, 00:23:05.722 { 00:23:05.722 "name": "BaseBdev3", 00:23:05.722 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:05.722 "is_configured": true, 00:23:05.722 "data_offset": 0, 00:23:05.722 "data_size": 65536 00:23:05.722 }, 00:23:05.722 { 00:23:05.722 "name": "BaseBdev4", 00:23:05.722 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:05.722 "is_configured": true, 00:23:05.722 "data_offset": 0, 00:23:05.722 "data_size": 65536 00:23:05.722 } 00:23:05.722 ] 00:23:05.722 }' 00:23:05.722 14:24:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:05.722 14:24:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.722 14:24:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:05.722 14:24:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.722 14:24:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:06.658 [2024-11-18 14:24:58.532713] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:06.658 [2024-11-18 14:24:58.532920] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:06.658 [2024-11-18 14:24:58.533097] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.658 14:24:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:06.917 "name": "raid_bdev1", 00:23:06.917 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:06.917 "strip_size_kb": 64, 00:23:06.917 "state": "online", 00:23:06.917 "raid_level": "raid5f", 00:23:06.917 "superblock": false, 00:23:06.917 "num_base_bdevs": 4, 00:23:06.917 "num_base_bdevs_discovered": 4, 00:23:06.917 "num_base_bdevs_operational": 4, 00:23:06.917 "base_bdevs_list": [ 00:23:06.917 { 
00:23:06.917 "name": "spare", 00:23:06.917 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:06.917 "is_configured": true, 00:23:06.917 "data_offset": 0, 00:23:06.917 "data_size": 65536 00:23:06.917 }, 00:23:06.917 { 00:23:06.917 "name": "BaseBdev2", 00:23:06.917 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:06.917 "is_configured": true, 00:23:06.917 "data_offset": 0, 00:23:06.917 "data_size": 65536 00:23:06.917 }, 00:23:06.917 { 00:23:06.917 "name": "BaseBdev3", 00:23:06.917 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:06.917 "is_configured": true, 00:23:06.917 "data_offset": 0, 00:23:06.917 "data_size": 65536 00:23:06.917 }, 00:23:06.917 { 00:23:06.917 "name": "BaseBdev4", 00:23:06.917 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:06.917 "is_configured": true, 00:23:06.917 "data_offset": 0, 00:23:06.917 "data_size": 65536 00:23:06.917 } 00:23:06.917 ] 00:23:06.917 }' 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@660 -- # break 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.917 14:24:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.176 14:24:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:07.176 "name": "raid_bdev1", 00:23:07.176 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:07.176 "strip_size_kb": 64, 00:23:07.176 "state": "online", 00:23:07.176 "raid_level": "raid5f", 00:23:07.176 "superblock": false, 00:23:07.176 "num_base_bdevs": 4, 00:23:07.176 "num_base_bdevs_discovered": 4, 00:23:07.176 "num_base_bdevs_operational": 4, 00:23:07.176 "base_bdevs_list": [ 00:23:07.176 { 00:23:07.176 "name": "spare", 00:23:07.176 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:07.176 "is_configured": true, 00:23:07.176 "data_offset": 0, 00:23:07.176 "data_size": 65536 00:23:07.176 }, 00:23:07.176 { 00:23:07.176 "name": "BaseBdev2", 00:23:07.176 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:07.176 "is_configured": true, 00:23:07.176 "data_offset": 0, 00:23:07.176 "data_size": 65536 00:23:07.176 }, 00:23:07.176 { 00:23:07.176 "name": "BaseBdev3", 00:23:07.176 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:07.176 "is_configured": true, 00:23:07.176 "data_offset": 0, 00:23:07.176 "data_size": 65536 00:23:07.176 }, 00:23:07.176 { 00:23:07.176 "name": "BaseBdev4", 00:23:07.176 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:07.176 "is_configured": true, 00:23:07.176 "data_offset": 0, 00:23:07.176 "data_size": 65536 00:23:07.176 } 00:23:07.176 ] 00:23:07.176 }' 00:23:07.176 14:24:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.434 14:24:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.434 "name": "raid_bdev1", 00:23:07.434 "uuid": "98d4cd0d-b9a0-4240-9a39-360fbaeb029c", 00:23:07.434 "strip_size_kb": 64, 00:23:07.434 "state": "online", 00:23:07.434 "raid_level": "raid5f", 00:23:07.434 "superblock": false, 00:23:07.434 "num_base_bdevs": 4, 00:23:07.434 "num_base_bdevs_discovered": 4, 00:23:07.434 "num_base_bdevs_operational": 4, 00:23:07.434 "base_bdevs_list": [ 00:23:07.434 { 00:23:07.434 "name": "spare", 00:23:07.434 "uuid": "c67a8827-af6e-50b6-9320-f301553dd75d", 00:23:07.434 "is_configured": true, 00:23:07.434 "data_offset": 0, 00:23:07.434 "data_size": 65536 00:23:07.434 }, 00:23:07.434 { 00:23:07.434 "name": "BaseBdev2", 00:23:07.434 "uuid": "378c2cb2-3b1c-4197-8243-254675b41b3b", 00:23:07.434 "is_configured": true, 00:23:07.434 "data_offset": 0, 00:23:07.434 "data_size": 65536 00:23:07.434 }, 00:23:07.434 { 00:23:07.434 "name": "BaseBdev3", 00:23:07.434 "uuid": "98dfeb6b-231c-41cc-90e8-df08559e0e96", 00:23:07.434 "is_configured": true, 00:23:07.434 "data_offset": 0, 00:23:07.434 "data_size": 65536 00:23:07.435 }, 00:23:07.435 { 00:23:07.435 "name": "BaseBdev4", 00:23:07.435 "uuid": "1d139f93-34ed-44a5-910c-894f191447d5", 00:23:07.435 "is_configured": true, 00:23:07.435 "data_offset": 0, 00:23:07.435 "data_size": 65536 00:23:07.435 } 00:23:07.435 ] 00:23:07.435 }' 00:23:07.435 14:24:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.435 14:24:59 -- common/autotest_common.sh@10 -- # set +x 00:23:08.370 14:25:00 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:08.370 [2024-11-18 14:25:00.412595] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.370 [2024-11-18 14:25:00.412742] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:08.370 [2024-11-18 14:25:00.412940] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.370 [2024-11-18 14:25:00.413123] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.370 [2024-11-18 14:25:00.413226] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:23:08.370 14:25:00 -- bdev/bdev_raid.sh@671 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.370 14:25:00 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:08.629 14:25:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:08.629 14:25:00 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:08.629 14:25:00 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@12 -- # local i 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:08.629 14:25:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:08.888 /dev/nbd0 00:23:08.889 14:25:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:08.889 14:25:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:08.889 14:25:00 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:08.889 14:25:00 -- common/autotest_common.sh@867 -- # local i 00:23:08.889 14:25:00 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:08.889 14:25:00 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:08.889 14:25:00 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:08.889 14:25:00 -- common/autotest_common.sh@871 -- # break 00:23:08.889 14:25:00 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:08.889 14:25:00 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:08.889 14:25:00 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:08.889 1+0 records in 00:23:08.889 1+0 records out 00:23:08.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512895 s, 8.0 MB/s 00:23:08.889 14:25:00 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.889 14:25:00 -- common/autotest_common.sh@884 -- # size=4096 00:23:08.889 14:25:00 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:08.889 14:25:00 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:08.889 14:25:00 -- common/autotest_common.sh@887 -- # return 0 00:23:08.889 14:25:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:08.889 14:25:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:08.889 14:25:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:09.148 /dev/nbd1 00:23:09.148 14:25:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.148 14:25:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.148 14:25:01 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:09.148 14:25:01 -- common/autotest_common.sh@867 -- # local i 00:23:09.148 14:25:01 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:09.148 14:25:01 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:09.148 14:25:01 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:09.148 14:25:01 -- common/autotest_common.sh@871 -- 
# break 00:23:09.148 14:25:01 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:09.148 14:25:01 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:09.148 14:25:01 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.148 1+0 records in 00:23:09.148 1+0 records out 00:23:09.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604608 s, 6.8 MB/s 00:23:09.148 14:25:01 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.148 14:25:01 -- common/autotest_common.sh@884 -- # size=4096 00:23:09.148 14:25:01 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:09.148 14:25:01 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:09.148 14:25:01 -- common/autotest_common.sh@887 -- # return 0 00:23:09.148 14:25:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.149 14:25:01 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:09.149 14:25:01 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@51 -- # local i 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.149 14:25:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@41 -- # break 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.407 14:25:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@41 -- # break 00:23:09.666 14:25:01 -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.666 14:25:01 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:09.666 14:25:01 -- bdev/bdev_raid.sh@709 -- # killprocess 140785 00:23:09.666 14:25:01 -- common/autotest_common.sh@936 -- # '[' -z 140785 ']' 00:23:09.666 14:25:01 -- common/autotest_common.sh@940 -- # kill -0 140785 00:23:09.666 14:25:01 -- common/autotest_common.sh@941 -- # uname 00:23:09.666 14:25:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.666 14:25:01 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140785 00:23:09.666 14:25:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:09.666 14:25:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:09.666 14:25:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140785' 00:23:09.666 killing process with pid 140785 00:23:09.666 14:25:01 -- common/autotest_common.sh@955 -- # kill 140785 00:23:09.666 Received shutdown signal, test time was about 60.000000 seconds 00:23:09.666 00:23:09.666 Latency(us) 00:23:09.666 [2024-11-18T14:25:01.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.666 [2024-11-18T14:25:01.740Z] =================================================================================================================== 00:23:09.666 [2024-11-18T14:25:01.740Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.666 14:25:01 -- common/autotest_common.sh@960 -- # wait 140785 00:23:09.666 [2024-11-18 14:25:01.730710] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:09.925 [2024-11-18 14:25:01.785569] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:10.184 00:23:10.184 real 0m22.946s 00:23:10.184 user 0m33.445s 00:23:10.184 sys 0m2.820s 00:23:10.184 14:25:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:10.184 14:25:02 -- common/autotest_common.sh@10 -- # set +x 00:23:10.184 ************************************ 00:23:10.184 END TEST raid5f_rebuild_test 00:23:10.184 ************************************ 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:23:10.184 14:25:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:10.184 14:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:10.184 14:25:02 -- common/autotest_common.sh@10 -- # set +x 00:23:10.184 ************************************ 00:23:10.184 START TEST raid5f_rebuild_test_sb 00:23:10.184 ************************************ 00:23:10.184 14:25:02 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:10.184 14:25:02 -- 
bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@544 -- # raid_pid=141387 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@545 -- # waitforlisten 141387 /var/tmp/spdk-raid.sock 00:23:10.184 14:25:02 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:10.184 14:25:02 -- common/autotest_common.sh@829 -- # '[' -z 141387 ']' 00:23:10.184 14:25:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:10.184 14:25:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.184 14:25:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:10.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:10.184 14:25:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.184 14:25:02 -- common/autotest_common.sh@10 -- # set +x 00:23:10.184 [2024-11-18 14:25:02.249766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:10.184 [2024-11-18 14:25:02.250206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141387 ] 00:23:10.184 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:10.184 Zero copy mechanism will not be used. 
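The records that follow show the harness assembling the array under test one layer at a time: a 32 MiB malloc bdev per base device, a passthru bdev claiming each malloc, and finally the raid5f bdev over all four passthru devices. Condensed into a standalone sketch, reusing only the rpc.py invocations recorded in this run (the rpc shell function is an editorial shorthand, not part of the test script):

    # Sketch only: same RPC socket and commands as traced below.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # One passthru-wrapped 32 MiB malloc bdev (512 B blocks) per base device,
    # mirroring the BaseBdev1..BaseBdev4 setup in the trace.
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

    # raid5f over the four wrappers: 64 KiB strip (-z 64), on-disk superblock (-s).
    rpc bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

The -s flag reserves an on-disk superblock, which is why the dumps below report data_offset 2048 blocks and data_size 63488 per base bdev. With four base bdevs and a 64 KiB strip, each raid5f stripe carries three data strips plus one parity strip, so the usable size is three bdevs' worth (3 x 63488 = 190464 blocks, the blockcnt logged at configure time) and the test later computes write_unit_size=384 blocks (384 x 512 B = 196608 B = 3 x 64 KiB), the bs it hands to dd when filling the array through /dev/nbd0.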
00:23:10.443 [2024-11-18 14:25:02.396451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.443 [2024-11-18 14:25:02.467319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.702 [2024-11-18 14:25:02.537007] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:11.269 14:25:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.269 14:25:03 -- common/autotest_common.sh@862 -- # return 0 00:23:11.269 14:25:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:11.269 14:25:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:11.269 14:25:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:11.528 BaseBdev1_malloc 00:23:11.528 14:25:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:11.787 [2024-11-18 14:25:03.712679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:11.787 [2024-11-18 14:25:03.713026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.787 [2024-11-18 14:25:03.713108] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:11.787 [2024-11-18 14:25:03.713421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.787 [2024-11-18 14:25:03.716039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.787 [2024-11-18 14:25:03.716219] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:11.787 BaseBdev1 00:23:11.787 14:25:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:11.788 14:25:03 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:11.788 14:25:03 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:12.047 BaseBdev2_malloc 00:23:12.047 14:25:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:12.047 [2024-11-18 14:25:04.102019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:12.047 [2024-11-18 14:25:04.102251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.047 [2024-11-18 14:25:04.102326] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:12.047 [2024-11-18 14:25:04.102471] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.047 [2024-11-18 14:25:04.104700] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.047 [2024-11-18 14:25:04.104870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:12.047 BaseBdev2 00:23:12.047 14:25:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:12.047 14:25:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:12.047 14:25:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:12.305 BaseBdev3_malloc 00:23:12.305 14:25:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:12.564 [2024-11-18 14:25:04.505376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:12.565 [2024-11-18 14:25:04.505573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.565 [2024-11-18 14:25:04.505646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:12.565 [2024-11-18 14:25:04.505771] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.565 [2024-11-18 14:25:04.508115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.565 [2024-11-18 14:25:04.508261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:12.565 BaseBdev3 00:23:12.565 14:25:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:12.565 14:25:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:12.565 14:25:04 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:12.824 BaseBdev4_malloc 00:23:12.824 14:25:04 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:12.824 [2024-11-18 14:25:04.894672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:12.824 [2024-11-18 14:25:04.894874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.824 [2024-11-18 14:25:04.894942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:12.824 [2024-11-18 14:25:04.895066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.824 [2024-11-18 14:25:04.897304] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.824 [2024-11-18 14:25:04.897465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:13.112 BaseBdev4 00:23:13.112 14:25:04 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:13.112 spare_malloc 00:23:13.112 14:25:05 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:13.370 spare_delay 00:23:13.370 14:25:05 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:13.649 [2024-11-18 14:25:05.471791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:13.649 [2024-11-18 14:25:05.471993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.649 [2024-11-18 14:25:05.472063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:13.649 [2024-11-18 14:25:05.472200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.649 [2024-11-18 14:25:05.474513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.649 [2024-11-18 14:25:05.474675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:13.649 spare 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:13.649 [2024-11-18 14:25:05.659907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:13.649 [2024-11-18 14:25:05.661985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:13.649 [2024-11-18 14:25:05.662149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:13.649 [2024-11-18 14:25:05.662266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:13.649 [2024-11-18 14:25:05.662603] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:13.649 [2024-11-18 14:25:05.662649] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:13.649 [2024-11-18 14:25:05.662859] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:13.649 [2024-11-18 14:25:05.663735] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:13.649 [2024-11-18 14:25:05.663842] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:23:13.649 [2024-11-18 14:25:05.664167] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.649 14:25:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.961 14:25:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.961 "name": "raid_bdev1", 00:23:13.961 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:13.961 "strip_size_kb": 64, 00:23:13.961 "state": "online", 00:23:13.961 "raid_level": "raid5f", 00:23:13.961 "superblock": true, 00:23:13.961 "num_base_bdevs": 4, 00:23:13.961 "num_base_bdevs_discovered": 4, 00:23:13.961 "num_base_bdevs_operational": 4, 00:23:13.961 "base_bdevs_list": [ 00:23:13.961 { 00:23:13.961 "name": "BaseBdev1", 00:23:13.961 "uuid": "2ff07f05-e18f-5ee0-9b9a-7718d11f8ee1", 00:23:13.961 "is_configured": true, 00:23:13.961 "data_offset": 2048, 00:23:13.961 "data_size": 63488 00:23:13.961 }, 00:23:13.961 { 00:23:13.961 "name": "BaseBdev2", 00:23:13.961 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:13.961 "is_configured": true, 00:23:13.961 "data_offset": 2048, 00:23:13.961 "data_size": 63488 00:23:13.961 }, 00:23:13.961 { 00:23:13.961 "name": "BaseBdev3", 00:23:13.961 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:13.961 "is_configured": true, 00:23:13.961 "data_offset": 2048, 00:23:13.961 "data_size": 63488 00:23:13.961 
}, 00:23:13.961 { 00:23:13.961 "name": "BaseBdev4", 00:23:13.961 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:13.961 "is_configured": true, 00:23:13.961 "data_offset": 2048, 00:23:13.961 "data_size": 63488 00:23:13.961 } 00:23:13.961 ] 00:23:13.961 }' 00:23:13.961 14:25:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.961 14:25:05 -- common/autotest_common.sh@10 -- # set +x 00:23:14.530 14:25:06 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:14.530 14:25:06 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:14.788 [2024-11-18 14:25:06.656394] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.788 14:25:06 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:23:14.788 14:25:06 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.788 14:25:06 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:15.047 14:25:06 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:15.047 14:25:06 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:15.047 14:25:06 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:15.047 14:25:06 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@12 -- # local i 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:15.047 14:25:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:15.047 [2024-11-18 14:25:07.108395] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:23:15.306 /dev/nbd0 00:23:15.306 14:25:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:15.306 14:25:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:15.306 14:25:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:15.306 14:25:07 -- common/autotest_common.sh@867 -- # local i 00:23:15.306 14:25:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:15.306 14:25:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:15.306 14:25:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:15.306 14:25:07 -- common/autotest_common.sh@871 -- # break 00:23:15.306 14:25:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:15.306 14:25:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:15.306 14:25:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:15.306 1+0 records in 00:23:15.306 1+0 records out 00:23:15.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389232 s, 10.5 MB/s 00:23:15.306 14:25:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.306 14:25:07 -- common/autotest_common.sh@884 -- # size=4096 00:23:15.306 14:25:07 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:15.306 14:25:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:15.306 14:25:07 -- common/autotest_common.sh@887 -- # return 0 00:23:15.306 14:25:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:15.306 14:25:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:15.306 14:25:07 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:15.306 14:25:07 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:23:15.306 14:25:07 -- bdev/bdev_raid.sh@582 -- # echo 192 00:23:15.306 14:25:07 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:23:15.873 496+0 records in 00:23:15.873 496+0 records out 00:23:15.873 97517568 bytes (98 MB, 93 MiB) copied, 0.523991 s, 186 MB/s 00:23:15.873 14:25:07 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@51 -- # local i 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:15.873 [2024-11-18 14:25:07.923620] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@41 -- # break 00:23:15.873 14:25:07 -- bdev/nbd_common.sh@45 -- # return 0 00:23:15.873 14:25:07 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:16.132 [2024-11-18 14:25:08.175132] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.132 14:25:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.392 14:25:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.392 "name": "raid_bdev1", 00:23:16.392 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:16.392 
"strip_size_kb": 64, 00:23:16.392 "state": "online", 00:23:16.392 "raid_level": "raid5f", 00:23:16.392 "superblock": true, 00:23:16.392 "num_base_bdevs": 4, 00:23:16.392 "num_base_bdevs_discovered": 3, 00:23:16.392 "num_base_bdevs_operational": 3, 00:23:16.392 "base_bdevs_list": [ 00:23:16.392 { 00:23:16.392 "name": null, 00:23:16.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.392 "is_configured": false, 00:23:16.392 "data_offset": 2048, 00:23:16.392 "data_size": 63488 00:23:16.392 }, 00:23:16.392 { 00:23:16.392 "name": "BaseBdev2", 00:23:16.392 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:16.392 "is_configured": true, 00:23:16.392 "data_offset": 2048, 00:23:16.392 "data_size": 63488 00:23:16.392 }, 00:23:16.392 { 00:23:16.392 "name": "BaseBdev3", 00:23:16.392 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:16.392 "is_configured": true, 00:23:16.392 "data_offset": 2048, 00:23:16.392 "data_size": 63488 00:23:16.392 }, 00:23:16.392 { 00:23:16.392 "name": "BaseBdev4", 00:23:16.392 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:16.392 "is_configured": true, 00:23:16.392 "data_offset": 2048, 00:23:16.392 "data_size": 63488 00:23:16.392 } 00:23:16.392 ] 00:23:16.392 }' 00:23:16.392 14:25:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.392 14:25:08 -- common/autotest_common.sh@10 -- # set +x 00:23:16.959 14:25:08 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.218 [2024-11-18 14:25:09.155339] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:17.218 [2024-11-18 14:25:09.155525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.218 [2024-11-18 14:25:09.161031] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:23:17.218 [2024-11-18 14:25:09.163623] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.218 14:25:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.154 14:25:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.413 14:25:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.413 "name": "raid_bdev1", 00:23:18.413 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:18.413 "strip_size_kb": 64, 00:23:18.413 "state": "online", 00:23:18.413 "raid_level": "raid5f", 00:23:18.413 "superblock": true, 00:23:18.413 "num_base_bdevs": 4, 00:23:18.413 "num_base_bdevs_discovered": 4, 00:23:18.413 "num_base_bdevs_operational": 4, 00:23:18.413 "process": { 00:23:18.413 "type": "rebuild", 00:23:18.413 "target": "spare", 00:23:18.413 "progress": { 00:23:18.413 "blocks": 21120, 00:23:18.413 "percent": 11 00:23:18.413 } 00:23:18.413 }, 00:23:18.413 "base_bdevs_list": [ 00:23:18.413 { 00:23:18.413 "name": "spare", 00:23:18.413 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:18.413 "is_configured": true, 
00:23:18.413 "data_offset": 2048, 00:23:18.413 "data_size": 63488 00:23:18.413 }, 00:23:18.413 { 00:23:18.413 "name": "BaseBdev2", 00:23:18.413 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:18.413 "is_configured": true, 00:23:18.413 "data_offset": 2048, 00:23:18.413 "data_size": 63488 00:23:18.413 }, 00:23:18.413 { 00:23:18.413 "name": "BaseBdev3", 00:23:18.413 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:18.413 "is_configured": true, 00:23:18.413 "data_offset": 2048, 00:23:18.413 "data_size": 63488 00:23:18.413 }, 00:23:18.413 { 00:23:18.413 "name": "BaseBdev4", 00:23:18.413 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:18.413 "is_configured": true, 00:23:18.413 "data_offset": 2048, 00:23:18.413 "data_size": 63488 00:23:18.413 } 00:23:18.413 ] 00:23:18.413 }' 00:23:18.413 14:25:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.413 14:25:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.413 14:25:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.413 14:25:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.413 14:25:10 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:18.672 [2024-11-18 14:25:10.693448] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:18.930 [2024-11-18 14:25:10.775483] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:18.930 [2024-11-18 14:25:10.775688] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.930 14:25:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.931 14:25:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.931 14:25:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.931 14:25:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.931 14:25:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.931 "name": "raid_bdev1", 00:23:18.931 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:18.931 "strip_size_kb": 64, 00:23:18.931 "state": "online", 00:23:18.931 "raid_level": "raid5f", 00:23:18.931 "superblock": true, 00:23:18.931 "num_base_bdevs": 4, 00:23:18.931 "num_base_bdevs_discovered": 3, 00:23:18.931 "num_base_bdevs_operational": 3, 00:23:18.931 "base_bdevs_list": [ 00:23:18.931 { 00:23:18.931 "name": null, 00:23:18.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.931 "is_configured": false, 00:23:18.931 "data_offset": 2048, 00:23:18.931 "data_size": 63488 00:23:18.931 }, 00:23:18.931 { 00:23:18.931 "name": "BaseBdev2", 00:23:18.931 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:18.931 "is_configured": true, 00:23:18.931 "data_offset": 
2048, 00:23:18.931 "data_size": 63488 00:23:18.931 }, 00:23:18.931 { 00:23:18.931 "name": "BaseBdev3", 00:23:18.931 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:18.931 "is_configured": true, 00:23:18.931 "data_offset": 2048, 00:23:18.931 "data_size": 63488 00:23:18.931 }, 00:23:18.931 { 00:23:18.931 "name": "BaseBdev4", 00:23:18.931 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:18.931 "is_configured": true, 00:23:18.931 "data_offset": 2048, 00:23:18.931 "data_size": 63488 00:23:18.931 } 00:23:18.931 ] 00:23:18.931 }' 00:23:18.931 14:25:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.931 14:25:10 -- common/autotest_common.sh@10 -- # set +x 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:19.867 "name": "raid_bdev1", 00:23:19.867 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:19.867 "strip_size_kb": 64, 00:23:19.867 "state": "online", 00:23:19.867 "raid_level": "raid5f", 00:23:19.867 "superblock": true, 00:23:19.867 "num_base_bdevs": 4, 00:23:19.867 "num_base_bdevs_discovered": 3, 00:23:19.867 "num_base_bdevs_operational": 3, 00:23:19.867 "base_bdevs_list": [ 00:23:19.867 { 00:23:19.867 "name": null, 00:23:19.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.867 "is_configured": false, 00:23:19.867 "data_offset": 2048, 00:23:19.867 "data_size": 63488 00:23:19.867 }, 00:23:19.867 { 00:23:19.867 "name": "BaseBdev2", 00:23:19.867 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:19.867 "is_configured": true, 00:23:19.867 "data_offset": 2048, 00:23:19.867 "data_size": 63488 00:23:19.867 }, 00:23:19.867 { 00:23:19.867 "name": "BaseBdev3", 00:23:19.867 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:19.867 "is_configured": true, 00:23:19.867 "data_offset": 2048, 00:23:19.867 "data_size": 63488 00:23:19.867 }, 00:23:19.867 { 00:23:19.867 "name": "BaseBdev4", 00:23:19.867 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:19.867 "is_configured": true, 00:23:19.867 "data_offset": 2048, 00:23:19.867 "data_size": 63488 00:23:19.867 } 00:23:19.867 ] 00:23:19.867 }' 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:19.867 14:25:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.126 [2024-11-18 14:25:12.089221] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:20.126 [2024-11-18 14:25:12.090134] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.126 [2024-11-18 14:25:12.096513] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000027240 00:23:20.126 [2024-11-18 14:25:12.102308] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.126 14:25:12 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.060 14:25:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.318 14:25:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:21.318 "name": "raid_bdev1", 00:23:21.318 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:21.318 "strip_size_kb": 64, 00:23:21.318 "state": "online", 00:23:21.318 "raid_level": "raid5f", 00:23:21.318 "superblock": true, 00:23:21.318 "num_base_bdevs": 4, 00:23:21.318 "num_base_bdevs_discovered": 4, 00:23:21.318 "num_base_bdevs_operational": 4, 00:23:21.318 "process": { 00:23:21.318 "type": "rebuild", 00:23:21.318 "target": "spare", 00:23:21.318 "progress": { 00:23:21.318 "blocks": 23040, 00:23:21.318 "percent": 12 00:23:21.318 } 00:23:21.318 }, 00:23:21.318 "base_bdevs_list": [ 00:23:21.318 { 00:23:21.318 "name": "spare", 00:23:21.318 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:21.318 "is_configured": true, 00:23:21.318 "data_offset": 2048, 00:23:21.318 "data_size": 63488 00:23:21.318 }, 00:23:21.318 { 00:23:21.318 "name": "BaseBdev2", 00:23:21.318 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:21.318 "is_configured": true, 00:23:21.318 "data_offset": 2048, 00:23:21.318 "data_size": 63488 00:23:21.318 }, 00:23:21.318 { 00:23:21.318 "name": "BaseBdev3", 00:23:21.318 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:21.318 "is_configured": true, 00:23:21.318 "data_offset": 2048, 00:23:21.318 "data_size": 63488 00:23:21.318 }, 00:23:21.318 { 00:23:21.318 "name": "BaseBdev4", 00:23:21.318 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:21.318 "is_configured": true, 00:23:21.318 "data_offset": 2048, 00:23:21.318 "data_size": 63488 00:23:21.318 } 00:23:21.318 ] 00:23:21.318 }' 00:23:21.318 14:25:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:21.577 14:25:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.577 14:25:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:21.577 14:25:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:21.577 14:25:13 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:21.577 14:25:13 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:21.578 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@657 -- # local timeout=669 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
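Two details in the rebuild phase deserve a note. First, the "/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected" complaint just above comes from an unquoted variable that expanded to nothing on the left-hand side of a [ test ('[' = false ']' is all that survives the expansion), the classic quoting bug; the run proceeds past it. Second, the once-per-second polling that follows repeats the same three-step probe: fetch the bdev, extract .process.type and .process.target with jq, sleep. A condensed sketch of that loop (editorial, reusing only the rpc.py and jq invocations recorded in the trace, with expansions quoted to avoid the line 617 class of error):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # Poll once per second until the rebuild process entry disappears from the
    # bdev_raid_get_bdevs output, i.e. '.process.type // "none"' falls back to "none".
    while :; do
        info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        ptarget=$(jq -r '.process.target // "none"' <<< "$info")
        [[ "$ptype" == "rebuild" && "$ptarget" == "spare" ]] || break
        sleep 1
    done

In the run below, each iteration dumps the full raid_bdev1 JSON, and the progress counter climbs from 23040 blocks (12%) toward completion before the loop's rebuild/spare match finally fails and the test moves on to verification.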
00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.578 14:25:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.837 14:25:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:21.837 "name": "raid_bdev1", 00:23:21.837 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:21.837 "strip_size_kb": 64, 00:23:21.837 "state": "online", 00:23:21.837 "raid_level": "raid5f", 00:23:21.837 "superblock": true, 00:23:21.837 "num_base_bdevs": 4, 00:23:21.837 "num_base_bdevs_discovered": 4, 00:23:21.837 "num_base_bdevs_operational": 4, 00:23:21.837 "process": { 00:23:21.837 "type": "rebuild", 00:23:21.837 "target": "spare", 00:23:21.837 "progress": { 00:23:21.837 "blocks": 28800, 00:23:21.837 "percent": 15 00:23:21.837 } 00:23:21.837 }, 00:23:21.837 "base_bdevs_list": [ 00:23:21.837 { 00:23:21.837 "name": "spare", 00:23:21.837 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:21.837 "is_configured": true, 00:23:21.837 "data_offset": 2048, 00:23:21.837 "data_size": 63488 00:23:21.837 }, 00:23:21.837 { 00:23:21.837 "name": "BaseBdev2", 00:23:21.837 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:21.837 "is_configured": true, 00:23:21.837 "data_offset": 2048, 00:23:21.837 "data_size": 63488 00:23:21.837 }, 00:23:21.837 { 00:23:21.837 "name": "BaseBdev3", 00:23:21.837 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:21.837 "is_configured": true, 00:23:21.837 "data_offset": 2048, 00:23:21.837 "data_size": 63488 00:23:21.837 }, 00:23:21.837 { 00:23:21.837 "name": "BaseBdev4", 00:23:21.837 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:21.837 "is_configured": true, 00:23:21.837 "data_offset": 2048, 00:23:21.837 "data_size": 63488 00:23:21.837 } 00:23:21.837 ] 00:23:21.837 }' 00:23:21.837 14:25:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:21.837 14:25:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:21.837 14:25:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:21.837 14:25:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:21.837 14:25:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.772 14:25:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.030 14:25:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.030 "name": "raid_bdev1", 00:23:23.030 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:23.030 "strip_size_kb": 64, 00:23:23.030 "state": "online", 00:23:23.030 "raid_level": "raid5f", 00:23:23.030 "superblock": true, 00:23:23.030 "num_base_bdevs": 4, 00:23:23.030 
"num_base_bdevs_discovered": 4, 00:23:23.030 "num_base_bdevs_operational": 4, 00:23:23.030 "process": { 00:23:23.030 "type": "rebuild", 00:23:23.030 "target": "spare", 00:23:23.030 "progress": { 00:23:23.030 "blocks": 53760, 00:23:23.030 "percent": 28 00:23:23.030 } 00:23:23.030 }, 00:23:23.030 "base_bdevs_list": [ 00:23:23.030 { 00:23:23.030 "name": "spare", 00:23:23.030 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:23.030 "is_configured": true, 00:23:23.030 "data_offset": 2048, 00:23:23.030 "data_size": 63488 00:23:23.030 }, 00:23:23.030 { 00:23:23.030 "name": "BaseBdev2", 00:23:23.030 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:23.030 "is_configured": true, 00:23:23.030 "data_offset": 2048, 00:23:23.030 "data_size": 63488 00:23:23.030 }, 00:23:23.030 { 00:23:23.030 "name": "BaseBdev3", 00:23:23.030 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:23.030 "is_configured": true, 00:23:23.030 "data_offset": 2048, 00:23:23.030 "data_size": 63488 00:23:23.030 }, 00:23:23.030 { 00:23:23.030 "name": "BaseBdev4", 00:23:23.030 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:23.030 "is_configured": true, 00:23:23.030 "data_offset": 2048, 00:23:23.030 "data_size": 63488 00:23:23.030 } 00:23:23.030 ] 00:23:23.030 }' 00:23:23.030 14:25:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.030 14:25:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.030 14:25:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.289 14:25:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.289 14:25:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.225 14:25:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.484 14:25:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.484 "name": "raid_bdev1", 00:23:24.484 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:24.484 "strip_size_kb": 64, 00:23:24.484 "state": "online", 00:23:24.484 "raid_level": "raid5f", 00:23:24.484 "superblock": true, 00:23:24.484 "num_base_bdevs": 4, 00:23:24.484 "num_base_bdevs_discovered": 4, 00:23:24.484 "num_base_bdevs_operational": 4, 00:23:24.484 "process": { 00:23:24.484 "type": "rebuild", 00:23:24.484 "target": "spare", 00:23:24.484 "progress": { 00:23:24.484 "blocks": 80640, 00:23:24.484 "percent": 42 00:23:24.484 } 00:23:24.484 }, 00:23:24.484 "base_bdevs_list": [ 00:23:24.484 { 00:23:24.484 "name": "spare", 00:23:24.484 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:24.484 "is_configured": true, 00:23:24.484 "data_offset": 2048, 00:23:24.484 "data_size": 63488 00:23:24.484 }, 00:23:24.484 { 00:23:24.484 "name": "BaseBdev2", 00:23:24.484 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:24.484 "is_configured": true, 00:23:24.484 "data_offset": 2048, 00:23:24.484 "data_size": 63488 00:23:24.484 }, 00:23:24.484 { 00:23:24.484 "name": "BaseBdev3", 00:23:24.484 
"uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:24.484 "is_configured": true, 00:23:24.484 "data_offset": 2048, 00:23:24.484 "data_size": 63488 00:23:24.484 }, 00:23:24.484 { 00:23:24.484 "name": "BaseBdev4", 00:23:24.484 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:24.484 "is_configured": true, 00:23:24.484 "data_offset": 2048, 00:23:24.484 "data_size": 63488 00:23:24.484 } 00:23:24.484 ] 00:23:24.484 }' 00:23:24.484 14:25:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.484 14:25:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:24.484 14:25:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.484 14:25:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:24.484 14:25:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:25.420 14:25:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.421 14:25:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.679 14:25:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:25.679 "name": "raid_bdev1", 00:23:25.679 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:25.679 "strip_size_kb": 64, 00:23:25.679 "state": "online", 00:23:25.679 "raid_level": "raid5f", 00:23:25.679 "superblock": true, 00:23:25.679 "num_base_bdevs": 4, 00:23:25.679 "num_base_bdevs_discovered": 4, 00:23:25.679 "num_base_bdevs_operational": 4, 00:23:25.679 "process": { 00:23:25.679 "type": "rebuild", 00:23:25.679 "target": "spare", 00:23:25.679 "progress": { 00:23:25.679 "blocks": 105600, 00:23:25.679 "percent": 55 00:23:25.679 } 00:23:25.679 }, 00:23:25.679 "base_bdevs_list": [ 00:23:25.679 { 00:23:25.679 "name": "spare", 00:23:25.679 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:25.679 "is_configured": true, 00:23:25.679 "data_offset": 2048, 00:23:25.679 "data_size": 63488 00:23:25.679 }, 00:23:25.679 { 00:23:25.679 "name": "BaseBdev2", 00:23:25.680 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:25.680 "is_configured": true, 00:23:25.680 "data_offset": 2048, 00:23:25.680 "data_size": 63488 00:23:25.680 }, 00:23:25.680 { 00:23:25.680 "name": "BaseBdev3", 00:23:25.680 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:25.680 "is_configured": true, 00:23:25.680 "data_offset": 2048, 00:23:25.680 "data_size": 63488 00:23:25.680 }, 00:23:25.680 { 00:23:25.680 "name": "BaseBdev4", 00:23:25.680 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:25.680 "is_configured": true, 00:23:25.680 "data_offset": 2048, 00:23:25.680 "data_size": 63488 00:23:25.680 } 00:23:25.680 ] 00:23:25.680 }' 00:23:25.680 14:25:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:25.938 14:25:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:25.938 14:25:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:25.938 14:25:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:25.938 14:25:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:26.878 
14:25:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.878 14:25:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.137 14:25:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:27.137 "name": "raid_bdev1", 00:23:27.137 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:27.137 "strip_size_kb": 64, 00:23:27.137 "state": "online", 00:23:27.137 "raid_level": "raid5f", 00:23:27.137 "superblock": true, 00:23:27.137 "num_base_bdevs": 4, 00:23:27.137 "num_base_bdevs_discovered": 4, 00:23:27.137 "num_base_bdevs_operational": 4, 00:23:27.137 "process": { 00:23:27.137 "type": "rebuild", 00:23:27.137 "target": "spare", 00:23:27.137 "progress": { 00:23:27.137 "blocks": 130560, 00:23:27.137 "percent": 68 00:23:27.137 } 00:23:27.137 }, 00:23:27.137 "base_bdevs_list": [ 00:23:27.137 { 00:23:27.137 "name": "spare", 00:23:27.137 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:27.137 "is_configured": true, 00:23:27.137 "data_offset": 2048, 00:23:27.137 "data_size": 63488 00:23:27.137 }, 00:23:27.137 { 00:23:27.137 "name": "BaseBdev2", 00:23:27.137 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:27.137 "is_configured": true, 00:23:27.137 "data_offset": 2048, 00:23:27.137 "data_size": 63488 00:23:27.137 }, 00:23:27.137 { 00:23:27.137 "name": "BaseBdev3", 00:23:27.137 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:27.137 "is_configured": true, 00:23:27.137 "data_offset": 2048, 00:23:27.137 "data_size": 63488 00:23:27.137 }, 00:23:27.137 { 00:23:27.137 "name": "BaseBdev4", 00:23:27.137 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:27.137 "is_configured": true, 00:23:27.137 "data_offset": 2048, 00:23:27.137 "data_size": 63488 00:23:27.137 } 00:23:27.137 ] 00:23:27.137 }' 00:23:27.137 14:25:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:27.137 14:25:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:27.137 14:25:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:27.137 14:25:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:27.137 14:25:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:28.514 "name": "raid_bdev1", 
00:23:28.514 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:28.514 "strip_size_kb": 64, 00:23:28.514 "state": "online", 00:23:28.514 "raid_level": "raid5f", 00:23:28.514 "superblock": true, 00:23:28.514 "num_base_bdevs": 4, 00:23:28.514 "num_base_bdevs_discovered": 4, 00:23:28.514 "num_base_bdevs_operational": 4, 00:23:28.514 "process": { 00:23:28.514 "type": "rebuild", 00:23:28.514 "target": "spare", 00:23:28.514 "progress": { 00:23:28.514 "blocks": 157440, 00:23:28.514 "percent": 82 00:23:28.514 } 00:23:28.514 }, 00:23:28.514 "base_bdevs_list": [ 00:23:28.514 { 00:23:28.514 "name": "spare", 00:23:28.514 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:28.514 "is_configured": true, 00:23:28.514 "data_offset": 2048, 00:23:28.514 "data_size": 63488 00:23:28.514 }, 00:23:28.514 { 00:23:28.514 "name": "BaseBdev2", 00:23:28.514 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:28.514 "is_configured": true, 00:23:28.514 "data_offset": 2048, 00:23:28.514 "data_size": 63488 00:23:28.514 }, 00:23:28.514 { 00:23:28.514 "name": "BaseBdev3", 00:23:28.514 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:28.514 "is_configured": true, 00:23:28.514 "data_offset": 2048, 00:23:28.514 "data_size": 63488 00:23:28.514 }, 00:23:28.514 { 00:23:28.514 "name": "BaseBdev4", 00:23:28.514 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:28.514 "is_configured": true, 00:23:28.514 "data_offset": 2048, 00:23:28.514 "data_size": 63488 00:23:28.514 } 00:23:28.514 ] 00:23:28.514 }' 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.514 14:25:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:29.895 "name": "raid_bdev1", 00:23:29.895 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:29.895 "strip_size_kb": 64, 00:23:29.895 "state": "online", 00:23:29.895 "raid_level": "raid5f", 00:23:29.895 "superblock": true, 00:23:29.895 "num_base_bdevs": 4, 00:23:29.895 "num_base_bdevs_discovered": 4, 00:23:29.895 "num_base_bdevs_operational": 4, 00:23:29.895 "process": { 00:23:29.895 "type": "rebuild", 00:23:29.895 "target": "spare", 00:23:29.895 "progress": { 00:23:29.895 "blocks": 182400, 00:23:29.895 "percent": 95 00:23:29.895 } 00:23:29.895 }, 00:23:29.895 "base_bdevs_list": [ 00:23:29.895 { 00:23:29.895 "name": "spare", 00:23:29.895 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:29.895 "is_configured": true, 00:23:29.895 "data_offset": 2048, 00:23:29.895 "data_size": 63488 00:23:29.895 }, 00:23:29.895 { 00:23:29.895 "name": 
"BaseBdev2", 00:23:29.895 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:29.895 "is_configured": true, 00:23:29.895 "data_offset": 2048, 00:23:29.895 "data_size": 63488 00:23:29.895 }, 00:23:29.895 { 00:23:29.895 "name": "BaseBdev3", 00:23:29.895 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:29.895 "is_configured": true, 00:23:29.895 "data_offset": 2048, 00:23:29.895 "data_size": 63488 00:23:29.895 }, 00:23:29.895 { 00:23:29.895 "name": "BaseBdev4", 00:23:29.895 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:29.895 "is_configured": true, 00:23:29.895 "data_offset": 2048, 00:23:29.895 "data_size": 63488 00:23:29.895 } 00:23:29.895 ] 00:23:29.895 }' 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:29.895 14:25:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:30.154 [2024-11-18 14:25:22.168421] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:30.154 [2024-11-18 14:25:22.168648] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:30.154 [2024-11-18 14:25:22.168950] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.089 14:25:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.090 14:25:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.090 14:25:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.090 "name": "raid_bdev1", 00:23:31.090 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:31.090 "strip_size_kb": 64, 00:23:31.090 "state": "online", 00:23:31.090 "raid_level": "raid5f", 00:23:31.090 "superblock": true, 00:23:31.090 "num_base_bdevs": 4, 00:23:31.090 "num_base_bdevs_discovered": 4, 00:23:31.090 "num_base_bdevs_operational": 4, 00:23:31.090 "base_bdevs_list": [ 00:23:31.090 { 00:23:31.090 "name": "spare", 00:23:31.090 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:31.090 "is_configured": true, 00:23:31.090 "data_offset": 2048, 00:23:31.090 "data_size": 63488 00:23:31.090 }, 00:23:31.090 { 00:23:31.090 "name": "BaseBdev2", 00:23:31.090 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:31.090 "is_configured": true, 00:23:31.090 "data_offset": 2048, 00:23:31.090 "data_size": 63488 00:23:31.090 }, 00:23:31.090 { 00:23:31.090 "name": "BaseBdev3", 00:23:31.090 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:31.090 "is_configured": true, 00:23:31.090 "data_offset": 2048, 00:23:31.090 "data_size": 63488 00:23:31.090 }, 00:23:31.090 { 00:23:31.090 "name": "BaseBdev4", 00:23:31.090 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:31.090 "is_configured": true, 00:23:31.090 "data_offset": 2048, 00:23:31.090 "data_size": 63488 00:23:31.090 } 
00:23:31.090 ] 00:23:31.090 }' 00:23:31.090 14:25:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.090 14:25:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:31.090 14:25:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@660 -- # break 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.349 14:25:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:31.608 "name": "raid_bdev1", 00:23:31.608 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:31.608 "strip_size_kb": 64, 00:23:31.608 "state": "online", 00:23:31.608 "raid_level": "raid5f", 00:23:31.608 "superblock": true, 00:23:31.608 "num_base_bdevs": 4, 00:23:31.608 "num_base_bdevs_discovered": 4, 00:23:31.608 "num_base_bdevs_operational": 4, 00:23:31.608 "base_bdevs_list": [ 00:23:31.608 { 00:23:31.608 "name": "spare", 00:23:31.608 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:31.608 "is_configured": true, 00:23:31.608 "data_offset": 2048, 00:23:31.608 "data_size": 63488 00:23:31.608 }, 00:23:31.608 { 00:23:31.608 "name": "BaseBdev2", 00:23:31.608 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:31.608 "is_configured": true, 00:23:31.608 "data_offset": 2048, 00:23:31.608 "data_size": 63488 00:23:31.608 }, 00:23:31.608 { 00:23:31.608 "name": "BaseBdev3", 00:23:31.608 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:31.608 "is_configured": true, 00:23:31.608 "data_offset": 2048, 00:23:31.608 "data_size": 63488 00:23:31.608 }, 00:23:31.608 { 00:23:31.608 "name": "BaseBdev4", 00:23:31.608 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:31.608 "is_configured": true, 00:23:31.608 "data_offset": 2048, 00:23:31.608 "data_size": 63488 00:23:31.608 } 00:23:31.608 ] 00:23:31.608 }' 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.608 14:25:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.608 14:25:23 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.609 14:25:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.609 14:25:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.867 14:25:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.867 "name": "raid_bdev1", 00:23:31.867 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:31.867 "strip_size_kb": 64, 00:23:31.867 "state": "online", 00:23:31.867 "raid_level": "raid5f", 00:23:31.867 "superblock": true, 00:23:31.867 "num_base_bdevs": 4, 00:23:31.867 "num_base_bdevs_discovered": 4, 00:23:31.867 "num_base_bdevs_operational": 4, 00:23:31.867 "base_bdevs_list": [ 00:23:31.867 { 00:23:31.867 "name": "spare", 00:23:31.867 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:31.867 "is_configured": true, 00:23:31.867 "data_offset": 2048, 00:23:31.867 "data_size": 63488 00:23:31.867 }, 00:23:31.867 { 00:23:31.867 "name": "BaseBdev2", 00:23:31.867 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:31.867 "is_configured": true, 00:23:31.867 "data_offset": 2048, 00:23:31.867 "data_size": 63488 00:23:31.867 }, 00:23:31.867 { 00:23:31.867 "name": "BaseBdev3", 00:23:31.867 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:31.867 "is_configured": true, 00:23:31.867 "data_offset": 2048, 00:23:31.867 "data_size": 63488 00:23:31.867 }, 00:23:31.867 { 00:23:31.867 "name": "BaseBdev4", 00:23:31.867 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:31.867 "is_configured": true, 00:23:31.867 "data_offset": 2048, 00:23:31.867 "data_size": 63488 00:23:31.868 } 00:23:31.868 ] 00:23:31.868 }' 00:23:31.868 14:25:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.868 14:25:23 -- common/autotest_common.sh@10 -- # set +x 00:23:32.435 14:25:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:32.694 [2024-11-18 14:25:24.568885] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.694 [2024-11-18 14:25:24.569050] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.694 [2024-11-18 14:25:24.569256] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.694 [2024-11-18 14:25:24.569472] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.694 [2024-11-18 14:25:24.569591] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:23:32.694 14:25:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.694 14:25:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:32.952 14:25:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:32.952 14:25:24 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:32.952 14:25:24 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:32.952 14:25:24 -- 
bdev/nbd_common.sh@12 -- # local i 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:32.952 14:25:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:32.952 /dev/nbd0 00:23:33.211 14:25:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:33.211 14:25:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:33.211 14:25:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:33.212 14:25:25 -- common/autotest_common.sh@867 -- # local i 00:23:33.212 14:25:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:33.212 14:25:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:33.212 14:25:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:33.212 14:25:25 -- common/autotest_common.sh@871 -- # break 00:23:33.212 14:25:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:33.212 14:25:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:33.212 14:25:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:33.212 1+0 records in 00:23:33.212 1+0 records out 00:23:33.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522731 s, 7.8 MB/s 00:23:33.212 14:25:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.212 14:25:25 -- common/autotest_common.sh@884 -- # size=4096 00:23:33.212 14:25:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.212 14:25:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:33.212 14:25:25 -- common/autotest_common.sh@887 -- # return 0 00:23:33.212 14:25:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:33.212 14:25:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:33.212 14:25:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:33.212 /dev/nbd1 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:33.471 14:25:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:33.471 14:25:25 -- common/autotest_common.sh@867 -- # local i 00:23:33.471 14:25:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:33.471 14:25:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:33.471 14:25:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:33.471 14:25:25 -- common/autotest_common.sh@871 -- # break 00:23:33.471 14:25:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:33.471 14:25:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:33.471 14:25:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:33.471 1+0 records in 00:23:33.471 1+0 records out 00:23:33.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670837 s, 6.1 MB/s 00:23:33.471 14:25:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.471 14:25:25 -- common/autotest_common.sh@884 -- # size=4096 00:23:33.471 14:25:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.471 14:25:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:33.471 14:25:25 -- 
common/autotest_common.sh@887 -- # return 0 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:33.471 14:25:25 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:33.471 14:25:25 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@51 -- # local i 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.471 14:25:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@41 -- # break 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.731 14:25:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:33.990 14:25:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:33.990 14:25:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:33.990 14:25:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:33.991 14:25:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.991 14:25:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.991 14:25:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:33.991 14:25:25 -- bdev/nbd_common.sh@41 -- # break 00:23:33.991 14:25:25 -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.991 14:25:25 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:33.991 14:25:25 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:33.991 14:25:25 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:33.991 14:25:25 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:34.250 14:25:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:34.509 [2024-11-18 14:25:26.336489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:34.509 [2024-11-18 14:25:26.336706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.509 [2024-11-18 14:25:26.336789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:34.509 [2024-11-18 14:25:26.337038] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.509 [2024-11-18 14:25:26.339743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.509 [2024-11-18 14:25:26.339932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:34.509 [2024-11-18 
14:25:26.340135] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:34.509 [2024-11-18 14:25:26.340287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:34.509 BaseBdev1 00:23:34.509 14:25:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:34.509 14:25:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:23:34.509 14:25:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:23:34.768 14:25:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:34.768 [2024-11-18 14:25:26.761037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:34.768 [2024-11-18 14:25:26.761234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.768 [2024-11-18 14:25:26.761311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:34.768 [2024-11-18 14:25:26.761572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.768 [2024-11-18 14:25:26.761963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.768 [2024-11-18 14:25:26.762150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:34.768 [2024-11-18 14:25:26.762333] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:23:34.768 [2024-11-18 14:25:26.762461] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:23:34.768 [2024-11-18 14:25:26.762557] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:34.768 [2024-11-18 14:25:26.762625] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:23:34.768 [2024-11-18 14:25:26.762768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.768 BaseBdev2 00:23:34.768 14:25:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:34.768 14:25:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:34.768 14:25:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:35.027 14:25:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:35.286 [2024-11-18 14:25:27.173101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:35.286 [2024-11-18 14:25:27.173327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.286 [2024-11-18 14:25:27.173396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:35.286 [2024-11-18 14:25:27.173538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.286 [2024-11-18 14:25:27.173920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.286 [2024-11-18 14:25:27.174081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:35.286 [2024-11-18 14:25:27.174312] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:23:35.286 [2024-11-18 14:25:27.174443] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:35.286 BaseBdev3 00:23:35.286 14:25:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:35.286 14:25:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:35.286 14:25:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:35.545 14:25:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:35.804 [2024-11-18 14:25:27.681241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:35.804 [2024-11-18 14:25:27.681455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.804 [2024-11-18 14:25:27.681528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:35.804 [2024-11-18 14:25:27.681786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.804 [2024-11-18 14:25:27.682222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.804 [2024-11-18 14:25:27.682414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:35.804 [2024-11-18 14:25:27.682587] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:35.804 [2024-11-18 14:25:27.682702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:35.804 BaseBdev4 00:23:35.804 14:25:27 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:36.063 14:25:27 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:36.063 [2024-11-18 14:25:28.065257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:36.063 [2024-11-18 14:25:28.065442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.063 [2024-11-18 14:25:28.065511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:36.063 [2024-11-18 14:25:28.065783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.063 [2024-11-18 14:25:28.066190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.063 [2024-11-18 14:25:28.066400] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:36.063 [2024-11-18 14:25:28.066576] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:36.063 [2024-11-18 14:25:28.066713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:36.063 spare 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:36.064 14:25:28 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.064 14:25:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.322 [2024-11-18 14:25:28.166864] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:23:36.322 [2024-11-18 14:25:28.167000] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:36.322 [2024-11-18 14:25:28.167161] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045ea0 00:23:36.322 [2024-11-18 14:25:28.168010] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:23:36.322 [2024-11-18 14:25:28.168135] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:23:36.322 [2024-11-18 14:25:28.168382] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.322 14:25:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:36.322 "name": "raid_bdev1", 00:23:36.322 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:36.322 "strip_size_kb": 64, 00:23:36.322 "state": "online", 00:23:36.322 "raid_level": "raid5f", 00:23:36.322 "superblock": true, 00:23:36.323 "num_base_bdevs": 4, 00:23:36.323 "num_base_bdevs_discovered": 4, 00:23:36.323 "num_base_bdevs_operational": 4, 00:23:36.323 "base_bdevs_list": [ 00:23:36.323 { 00:23:36.323 "name": "spare", 00:23:36.323 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:36.323 "is_configured": true, 00:23:36.323 "data_offset": 2048, 00:23:36.323 "data_size": 63488 00:23:36.323 }, 00:23:36.323 { 00:23:36.323 "name": "BaseBdev2", 00:23:36.323 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:36.323 "is_configured": true, 00:23:36.323 "data_offset": 2048, 00:23:36.323 "data_size": 63488 00:23:36.323 }, 00:23:36.323 { 00:23:36.323 "name": "BaseBdev3", 00:23:36.323 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:36.323 "is_configured": true, 00:23:36.323 "data_offset": 2048, 00:23:36.323 "data_size": 63488 00:23:36.323 }, 00:23:36.323 { 00:23:36.323 "name": "BaseBdev4", 00:23:36.323 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:36.323 "is_configured": true, 00:23:36.323 "data_offset": 2048, 00:23:36.323 "data_size": 63488 00:23:36.323 } 00:23:36.323 ] 00:23:36.323 }' 00:23:36.323 14:25:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:36.323 14:25:28 -- common/autotest_common.sh@10 -- # set +x 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.261 14:25:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:37.261 "name": 
"raid_bdev1", 00:23:37.261 "uuid": "e4c7b7ea-ba69-4ea1-abf2-e60ac9943aa6", 00:23:37.261 "strip_size_kb": 64, 00:23:37.261 "state": "online", 00:23:37.261 "raid_level": "raid5f", 00:23:37.261 "superblock": true, 00:23:37.261 "num_base_bdevs": 4, 00:23:37.261 "num_base_bdevs_discovered": 4, 00:23:37.261 "num_base_bdevs_operational": 4, 00:23:37.261 "base_bdevs_list": [ 00:23:37.261 { 00:23:37.261 "name": "spare", 00:23:37.261 "uuid": "862d2c52-7e9d-5e7e-b8f8-343827f10a9e", 00:23:37.261 "is_configured": true, 00:23:37.261 "data_offset": 2048, 00:23:37.261 "data_size": 63488 00:23:37.261 }, 00:23:37.261 { 00:23:37.261 "name": "BaseBdev2", 00:23:37.261 "uuid": "a466a8ff-779f-5184-88b0-4f8d434a4986", 00:23:37.261 "is_configured": true, 00:23:37.261 "data_offset": 2048, 00:23:37.261 "data_size": 63488 00:23:37.261 }, 00:23:37.261 { 00:23:37.261 "name": "BaseBdev3", 00:23:37.261 "uuid": "8f4c0dcc-b8f2-5627-a699-d0fcb45163d2", 00:23:37.261 "is_configured": true, 00:23:37.261 "data_offset": 2048, 00:23:37.261 "data_size": 63488 00:23:37.261 }, 00:23:37.261 { 00:23:37.261 "name": "BaseBdev4", 00:23:37.261 "uuid": "d95357f1-4f6e-502c-951e-6a0d04383eeb", 00:23:37.261 "is_configured": true, 00:23:37.261 "data_offset": 2048, 00:23:37.261 "data_size": 63488 00:23:37.261 } 00:23:37.261 ] 00:23:37.261 }' 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.261 14:25:29 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:37.520 14:25:29 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.520 14:25:29 -- bdev/bdev_raid.sh@709 -- # killprocess 141387 00:23:37.520 14:25:29 -- common/autotest_common.sh@936 -- # '[' -z 141387 ']' 00:23:37.520 14:25:29 -- common/autotest_common.sh@940 -- # kill -0 141387 00:23:37.520 14:25:29 -- common/autotest_common.sh@941 -- # uname 00:23:37.520 14:25:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.520 14:25:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141387 00:23:37.520 killing process with pid 141387 00:23:37.520 Received shutdown signal, test time was about 60.000000 seconds 00:23:37.520 00:23:37.520 Latency(us) 00:23:37.520 [2024-11-18T14:25:29.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.520 [2024-11-18T14:25:29.594Z] =================================================================================================================== 00:23:37.520 [2024-11-18T14:25:29.594Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:37.520 14:25:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:37.520 14:25:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:37.520 14:25:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141387' 00:23:37.520 14:25:29 -- common/autotest_common.sh@955 -- # kill 141387 00:23:37.520 14:25:29 -- common/autotest_common.sh@960 -- # wait 141387 00:23:37.520 [2024-11-18 14:25:29.558189] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:37.521 [2024-11-18 14:25:29.558307] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:23:37.521 [2024-11-18 14:25:29.558414] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:37.521 [2024-11-18 14:25:29.558426] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:23:37.779 [2024-11-18 14:25:29.603467] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:37.779 14:25:29 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:37.779 00:23:37.779 real 0m27.656s 00:23:37.779 user 0m42.481s 00:23:37.779 sys 0m3.251s 00:23:37.779 14:25:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:37.779 14:25:29 -- common/autotest_common.sh@10 -- # set +x 00:23:37.779 ************************************ 00:23:37.779 END TEST raid5f_rebuild_test_sb 00:23:37.779 ************************************ 00:23:38.049 14:25:29 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:23:38.049 ************************************ 00:23:38.049 END TEST bdev_raid 00:23:38.049 ************************************ 00:23:38.049 00:23:38.049 real 10m55.472s 00:23:38.049 user 18m36.657s 00:23:38.049 sys 1m24.924s 00:23:38.049 14:25:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:38.049 14:25:29 -- common/autotest_common.sh@10 -- # set +x 00:23:38.049 14:25:29 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:23:38.049 14:25:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:38.049 14:25:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:38.049 14:25:29 -- common/autotest_common.sh@10 -- # set +x 00:23:38.049 ************************************ 00:23:38.049 START TEST bdevperf_config 00:23:38.049 ************************************ 00:23:38.049 14:25:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:23:38.049 * Looking for test storage... 00:23:38.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:23:38.049 14:25:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:38.049 14:25:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:38.049 14:25:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:38.049 14:25:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:38.049 14:25:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:38.049 14:25:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:38.049 14:25:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:38.049 14:25:30 -- scripts/common.sh@335 -- # IFS=.-: 00:23:38.049 14:25:30 -- scripts/common.sh@335 -- # read -ra ver1 00:23:38.049 14:25:30 -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.049 14:25:30 -- scripts/common.sh@336 -- # read -ra ver2 00:23:38.049 14:25:30 -- scripts/common.sh@337 -- # local 'op=<' 00:23:38.049 14:25:30 -- scripts/common.sh@339 -- # ver1_l=2 00:23:38.049 14:25:30 -- scripts/common.sh@340 -- # ver2_l=1 00:23:38.049 14:25:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:38.049 14:25:30 -- scripts/common.sh@343 -- # case "$op" in 00:23:38.049 14:25:30 -- scripts/common.sh@344 -- # : 1 00:23:38.049 14:25:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:38.049 14:25:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.049 14:25:30 -- scripts/common.sh@364 -- # decimal 1 00:23:38.049 14:25:30 -- scripts/common.sh@352 -- # local d=1 00:23:38.049 14:25:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.049 14:25:30 -- scripts/common.sh@354 -- # echo 1 00:23:38.049 14:25:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:38.049 14:25:30 -- scripts/common.sh@365 -- # decimal 2 00:23:38.049 14:25:30 -- scripts/common.sh@352 -- # local d=2 00:23:38.049 14:25:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.049 14:25:30 -- scripts/common.sh@354 -- # echo 2 00:23:38.049 14:25:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:38.049 14:25:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:38.049 14:25:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:38.049 14:25:30 -- scripts/common.sh@367 -- # return 0 00:23:38.049 14:25:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.049 14:25:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.049 --rc genhtml_branch_coverage=1 00:23:38.049 --rc genhtml_function_coverage=1 00:23:38.049 --rc genhtml_legend=1 00:23:38.049 --rc geninfo_all_blocks=1 00:23:38.049 --rc geninfo_unexecuted_blocks=1 00:23:38.049 00:23:38.049 ' 00:23:38.049 14:25:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.049 --rc genhtml_branch_coverage=1 00:23:38.049 --rc genhtml_function_coverage=1 00:23:38.049 --rc genhtml_legend=1 00:23:38.049 --rc geninfo_all_blocks=1 00:23:38.049 --rc geninfo_unexecuted_blocks=1 00:23:38.049 00:23:38.049 ' 00:23:38.049 14:25:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.049 --rc genhtml_branch_coverage=1 00:23:38.049 --rc genhtml_function_coverage=1 00:23:38.049 --rc genhtml_legend=1 00:23:38.049 --rc geninfo_all_blocks=1 00:23:38.049 --rc geninfo_unexecuted_blocks=1 00:23:38.049 00:23:38.049 ' 00:23:38.049 14:25:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:38.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.049 --rc genhtml_branch_coverage=1 00:23:38.049 --rc genhtml_function_coverage=1 00:23:38.049 --rc genhtml_legend=1 00:23:38.049 --rc geninfo_all_blocks=1 00:23:38.049 --rc geninfo_unexecuted_blocks=1 00:23:38.049 00:23:38.049 ' 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:23:38.049 14:25:30 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:23:38.049 14:25:30 -- bdevperf/common.sh@8 -- # local job_section=global 00:23:38.049 14:25:30 -- bdevperf/common.sh@9 -- # local rw=read 00:23:38.049 14:25:30 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:38.049 14:25:30 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:38.049 14:25:30 -- bdevperf/common.sh@13 
-- # cat 00:23:38.049 14:25:30 -- bdevperf/common.sh@18 -- # job='[global]' 00:23:38.049 14:25:30 -- bdevperf/common.sh@19 -- # echo 00:23:38.049 00:23:38.049 14:25:30 -- bdevperf/common.sh@20 -- # cat 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@18 -- # create_job job0 00:23:38.049 14:25:30 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:38.049 14:25:30 -- bdevperf/common.sh@9 -- # local rw= 00:23:38.049 14:25:30 -- bdevperf/common.sh@10 -- # local filename= 00:23:38.049 14:25:30 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:38.049 14:25:30 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:38.049 14:25:30 -- bdevperf/common.sh@19 -- # echo 00:23:38.049 00:23:38.049 14:25:30 -- bdevperf/common.sh@20 -- # cat 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@19 -- # create_job job1 00:23:38.049 14:25:30 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:38.049 14:25:30 -- bdevperf/common.sh@9 -- # local rw= 00:23:38.049 14:25:30 -- bdevperf/common.sh@10 -- # local filename= 00:23:38.049 14:25:30 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:38.049 14:25:30 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:38.049 14:25:30 -- bdevperf/common.sh@19 -- # echo 00:23:38.049 00:23:38.049 14:25:30 -- bdevperf/common.sh@20 -- # cat 00:23:38.049 14:25:30 -- bdevperf/test_config.sh@20 -- # create_job job2 00:23:38.049 14:25:30 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:38.049 14:25:30 -- bdevperf/common.sh@9 -- # local rw= 00:23:38.049 14:25:30 -- bdevperf/common.sh@10 -- # local filename= 00:23:38.049 14:25:30 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:38.049 14:25:30 -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:38.049 14:25:30 -- bdevperf/common.sh@19 -- # echo 00:23:38.049 00:23:38.049 14:25:30 -- bdevperf/common.sh@20 -- # cat 00:23:38.345 14:25:30 -- bdevperf/test_config.sh@21 -- # create_job job3 00:23:38.345 14:25:30 -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:38.345 14:25:30 -- bdevperf/common.sh@9 -- # local rw= 00:23:38.345 14:25:30 -- bdevperf/common.sh@10 -- # local filename= 00:23:38.345 14:25:30 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:38.345 14:25:30 -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:38.345 14:25:30 -- bdevperf/common.sh@19 -- # echo 00:23:38.345 00:23:38.345 14:25:30 -- bdevperf/common.sh@20 -- # cat 00:23:38.345 14:25:30 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:40.895 14:25:32 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-18 14:25:30.169656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:40.895 [2024-11-18 14:25:30.169911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142139 ] 00:23:40.895 Using job config with 4 jobs 00:23:40.895 [2024-11-18 14:25:30.316894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.895 [2024-11-18 14:25:30.402305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.895 cpumask for '\''job0'\'' is too big 00:23:40.895 cpumask for '\''job1'\'' is too big 00:23:40.895 cpumask for '\''job2'\'' is too big 00:23:40.895 cpumask for '\''job3'\'' is too big 00:23:40.895 Running I/O for 2 seconds... 00:23:40.895 00:23:40.895 Latency(us) 00:23:40.895 [2024-11-18T14:25:32.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.895 [2024-11-18T14:25:32.969Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.895 Malloc0 : 2.01 31437.98 30.70 0.00 0.00 8133.97 1504.35 16801.05 00:23:40.895 [2024-11-18T14:25:32.969Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.895 Malloc0 : 2.01 31416.94 30.68 0.00 0.00 8125.12 1414.98 16920.20 00:23:40.895 [2024-11-18T14:25:32.969Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.895 Malloc0 : 2.02 31462.61 30.73 0.00 0.00 8099.95 1429.88 18111.77 00:23:40.895 [2024-11-18T14:25:32.969Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.895 Malloc0 : 2.02 31442.30 30.71 0.00 0.00 8090.95 1407.53 18230.92 00:23:40.895 [2024-11-18T14:25:32.969Z] =================================================================================================================== 00:23:40.895 [2024-11-18T14:25:32.969Z] Total : 125759.83 122.81 0.00 0.00 8112.46 1407.53 18230.92' 00:23:40.895 14:25:32 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-18 14:25:30.169656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:40.895 [2024-11-18 14:25:30.169911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142139 ] 00:23:40.895 Using job config with 4 jobs 00:23:40.895 [2024-11-18 14:25:30.316894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.895 [2024-11-18 14:25:30.402305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.895 cpumask for '\''job0'\'' is too big 00:23:40.895 cpumask for '\''job1'\'' is too big 00:23:40.895 cpumask for '\''job2'\'' is too big 00:23:40.895 cpumask for '\''job3'\'' is too big 00:23:40.895 Running I/O for 2 seconds... 
00:23:40.895 00:23:40.895 Latency(us) 00:23:40.895 [2024-11-18T14:25:32.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.895 [2024-11-18T14:25:32.969Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.895 Malloc0 : 2.01 31437.98 30.70 0.00 0.00 8133.97 1504.35 16801.05 00:23:40.895 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.01 31416.94 30.68 0.00 0.00 8125.12 1414.98 16920.20 00:23:40.896 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.02 31462.61 30.73 0.00 0.00 8099.95 1429.88 18111.77 00:23:40.896 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.02 31442.30 30.71 0.00 0.00 8090.95 1407.53 18230.92 00:23:40.896 [2024-11-18T14:25:32.970Z] =================================================================================================================== 00:23:40.896 [2024-11-18T14:25:32.970Z] Total : 125759.83 122.81 0.00 0.00 8112.46 1407.53 18230.92' 00:23:40.896 14:25:32 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 14:25:30.169656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:40.896 [2024-11-18 14:25:30.169911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142139 ] 00:23:40.896 Using job config with 4 jobs 00:23:40.896 [2024-11-18 14:25:30.316894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.896 [2024-11-18 14:25:30.402305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.896 cpumask for '\''job0'\'' is too big 00:23:40.896 cpumask for '\''job1'\'' is too big 00:23:40.896 cpumask for '\''job2'\'' is too big 00:23:40.896 cpumask for '\''job3'\'' is too big 00:23:40.896 Running I/O for 2 seconds... 
00:23:40.896 00:23:40.896 Latency(us) 00:23:40.896 [2024-11-18T14:25:32.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.896 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.01 31437.98 30.70 0.00 0.00 8133.97 1504.35 16801.05 00:23:40.896 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.01 31416.94 30.68 0.00 0.00 8125.12 1414.98 16920.20 00:23:40.896 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.02 31462.61 30.73 0.00 0.00 8099.95 1429.88 18111.77 00:23:40.896 [2024-11-18T14:25:32.970Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.896 Malloc0 : 2.02 31442.30 30.71 0.00 0.00 8090.95 1407.53 18230.92 00:23:40.896 [2024-11-18T14:25:32.970Z] =================================================================================================================== 00:23:40.896 [2024-11-18T14:25:32.970Z] Total : 125759.83 122.81 0.00 0.00 8112.46 1407.53 18230.92' 00:23:40.896 14:25:32 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:40.896 14:25:32 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:40.896 14:25:32 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:40.896 14:25:32 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:40.896 [2024-11-18 14:25:32.927173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:40.896 [2024-11-18 14:25:32.927635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142182 ] 00:23:41.155 [2024-11-18 14:25:33.072494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.155 [2024-11-18 14:25:33.173143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.414 cpumask for 'job0' is too big 00:23:41.414 cpumask for 'job1' is too big 00:23:41.414 cpumask for 'job2' is too big 00:23:41.414 cpumask for 'job3' is too big 00:23:43.948 14:25:35 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:23:43.948 Running I/O for 2 seconds... 
00:23:43.948 00:23:43.948 Latency(us) 00:23:43.948 [2024-11-18T14:25:36.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.948 [2024-11-18T14:25:36.022Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:43.948 Malloc0 : 2.01 32473.98 31.71 0.00 0.00 7879.86 1541.59 13166.78 00:23:43.948 [2024-11-18T14:25:36.022Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:43.948 Malloc0 : 2.02 32477.38 31.72 0.00 0.00 7864.01 1414.98 11558.17 00:23:43.948 [2024-11-18T14:25:36.022Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:43.948 Malloc0 : 2.02 32454.06 31.69 0.00 0.00 7855.47 1437.32 10247.45 00:23:43.948 [2024-11-18T14:25:36.022Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:43.948 Malloc0 : 2.02 32433.18 31.67 0.00 0.00 7847.38 1422.43 10247.45 00:23:43.948 [2024-11-18T14:25:36.022Z] =================================================================================================================== 00:23:43.948 [2024-11-18T14:25:36.022Z] Total : 129838.58 126.80 0.00 0.00 7861.66 1414.98 13166.78' 00:23:43.948 14:25:35 -- bdevperf/test_config.sh@27 -- # cleanup 00:23:43.948 14:25:35 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:43.948 00:23:43.948 14:25:35 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:23:43.948 14:25:35 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:43.948 14:25:35 -- bdevperf/common.sh@9 -- # local rw=write 00:23:43.948 14:25:35 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:43.948 14:25:35 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:43.948 14:25:35 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:43.948 14:25:35 -- bdevperf/common.sh@19 -- # echo 00:23:43.948 14:25:35 -- bdevperf/common.sh@20 -- # cat 00:23:43.948 00:23:43.948 14:25:35 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:23:43.948 14:25:35 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:43.948 14:25:35 -- bdevperf/common.sh@9 -- # local rw=write 00:23:43.948 14:25:35 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:43.948 14:25:35 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:43.948 14:25:35 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:43.948 14:25:35 -- bdevperf/common.sh@19 -- # echo 00:23:43.948 14:25:35 -- bdevperf/common.sh@20 -- # cat 00:23:43.948 14:25:35 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:23:43.948 14:25:35 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:43.948 14:25:35 -- bdevperf/common.sh@9 -- # local rw=write 00:23:43.948 14:25:35 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:43.948 14:25:35 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:43.948 14:25:35 -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:43.948 14:25:35 -- bdevperf/common.sh@19 -- # echo 00:23:43.948 00:23:43.948 14:25:35 -- bdevperf/common.sh@20 -- # cat 00:23:43.948 14:25:35 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:46.483 14:25:38 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-18 14:25:35.710596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:46.483 [2024-11-18 14:25:35.711299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142221 ] 00:23:46.483 Using job config with 3 jobs 00:23:46.483 [2024-11-18 14:25:35.860605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.483 [2024-11-18 14:25:35.943818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.483 cpumask for '\''job0'\'' is too big 00:23:46.483 cpumask for '\''job1'\'' is too big 00:23:46.483 cpumask for '\''job2'\'' is too big 00:23:46.483 Running I/O for 2 seconds... 00:23:46.483 00:23:46.483 Latency(us) 00:23:46.483 [2024-11-18T14:25:38.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44171.07 43.14 0.00 0.00 5790.01 1467.11 8698.41 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44141.70 43.11 0.00 0.00 5784.03 1444.77 7447.27 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44111.96 43.08 0.00 0.00 5778.29 1400.09 7387.69 00:23:46.483 [2024-11-18T14:25:38.557Z] =================================================================================================================== 00:23:46.483 [2024-11-18T14:25:38.557Z] Total : 132424.73 129.32 0.00 0.00 5784.11 1400.09 8698.41' 00:23:46.483 14:25:38 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-18 14:25:35.710596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:46.483 [2024-11-18 14:25:35.711299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142221 ] 00:23:46.483 Using job config with 3 jobs 00:23:46.483 [2024-11-18 14:25:35.860605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.483 [2024-11-18 14:25:35.943818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.483 cpumask for '\''job0'\'' is too big 00:23:46.483 cpumask for '\''job1'\'' is too big 00:23:46.483 cpumask for '\''job2'\'' is too big 00:23:46.483 Running I/O for 2 seconds... 
00:23:46.483 00:23:46.483 Latency(us) 00:23:46.483 [2024-11-18T14:25:38.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44171.07 43.14 0.00 0.00 5790.01 1467.11 8698.41 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44141.70 43.11 0.00 0.00 5784.03 1444.77 7447.27 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44111.96 43.08 0.00 0.00 5778.29 1400.09 7387.69 00:23:46.483 [2024-11-18T14:25:38.557Z] =================================================================================================================== 00:23:46.483 [2024-11-18T14:25:38.557Z] Total : 132424.73 129.32 0.00 0.00 5784.11 1400.09 8698.41' 00:23:46.483 14:25:38 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 14:25:35.710596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:46.483 [2024-11-18 14:25:35.711299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142221 ] 00:23:46.483 Using job config with 3 jobs 00:23:46.483 [2024-11-18 14:25:35.860605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.483 [2024-11-18 14:25:35.943818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.483 cpumask for '\''job0'\'' is too big 00:23:46.483 cpumask for '\''job1'\'' is too big 00:23:46.483 cpumask for '\''job2'\'' is too big 00:23:46.483 Running I/O for 2 seconds... 
00:23:46.483 00:23:46.483 Latency(us) 00:23:46.483 [2024-11-18T14:25:38.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44171.07 43.14 0.00 0.00 5790.01 1467.11 8698.41 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44141.70 43.11 0.00 0.00 5784.03 1444.77 7447.27 00:23:46.483 [2024-11-18T14:25:38.557Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:46.483 Malloc0 : 2.01 44111.96 43.08 0.00 0.00 5778.29 1400.09 7387.69 00:23:46.483 [2024-11-18T14:25:38.557Z] =================================================================================================================== 00:23:46.483 [2024-11-18T14:25:38.557Z] Total : 132424.73 129.32 0.00 0.00 5784.11 1400.09 8698.41' 00:23:46.483 14:25:38 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:46.483 14:25:38 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:46.483 14:25:38 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:46.483 14:25:38 -- bdevperf/test_config.sh@35 -- # cleanup 00:23:46.484 14:25:38 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:46.484 14:25:38 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:46.484 14:25:38 -- bdevperf/common.sh@8 -- # local job_section=global 00:23:46.484 14:25:38 -- bdevperf/common.sh@9 -- # local rw=rw 00:23:46.484 14:25:38 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:46.484 14:25:38 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:46.484 14:25:38 -- bdevperf/common.sh@13 -- # cat 00:23:46.484 14:25:38 -- bdevperf/common.sh@18 -- # job='[global]' 00:23:46.484 14:25:38 -- bdevperf/common.sh@19 -- # echo 00:23:46.484 00:23:46.484 14:25:38 -- bdevperf/common.sh@20 -- # cat 00:23:46.484 14:25:38 -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:46.484 14:25:38 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:46.484 14:25:38 -- bdevperf/common.sh@9 -- # local rw= 00:23:46.484 14:25:38 -- bdevperf/common.sh@10 -- # local filename= 00:23:46.484 14:25:38 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:46.484 14:25:38 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:46.484 14:25:38 -- bdevperf/common.sh@19 -- # echo 00:23:46.484 00:23:46.484 14:25:38 -- bdevperf/common.sh@20 -- # cat 00:23:46.484 14:25:38 -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:46.484 14:25:38 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:46.484 14:25:38 -- bdevperf/common.sh@9 -- # local rw= 00:23:46.484 14:25:38 -- bdevperf/common.sh@10 -- # local filename= 00:23:46.484 14:25:38 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:46.484 14:25:38 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:46.484 14:25:38 -- bdevperf/common.sh@19 -- # echo 00:23:46.484 00:23:46.484 14:25:38 -- bdevperf/common.sh@20 -- # cat 00:23:46.484 14:25:38 -- bdevperf/test_config.sh@40 -- # create_job job2 00:23:46.484 14:25:38 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:46.484 14:25:38 -- bdevperf/common.sh@9 -- # local rw= 00:23:46.484 14:25:38 -- bdevperf/common.sh@10 -- # local filename= 00:23:46.484 14:25:38 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:46.484 14:25:38 -- bdevperf/common.sh@18 -- # job='[job2]' 
00:23:46.484 14:25:38 -- bdevperf/common.sh@19 -- # echo 00:23:46.484 00:23:46.484 14:25:38 -- bdevperf/common.sh@20 -- # cat 00:23:46.484 14:25:38 -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:46.484 14:25:38 -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:46.484 14:25:38 -- bdevperf/common.sh@9 -- # local rw= 00:23:46.484 14:25:38 -- bdevperf/common.sh@10 -- # local filename= 00:23:46.484 14:25:38 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:46.484 14:25:38 -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:46.484 14:25:38 -- bdevperf/common.sh@19 -- # echo 00:23:46.484 00:23:46.484 14:25:38 -- bdevperf/common.sh@20 -- # cat 00:23:46.484 14:25:38 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:49.774 14:25:41 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-18 14:25:38.469923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:49.774 [2024-11-18 14:25:38.470154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142274 ] 00:23:49.774 Using job config with 4 jobs 00:23:49.774 [2024-11-18 14:25:38.616251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.774 [2024-11-18 14:25:38.706828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.774 cpumask for '\''job0'\'' is too big 00:23:49.774 cpumask for '\''job1'\'' is too big 00:23:49.774 cpumask for '\''job2'\'' is too big 00:23:49.774 cpumask for '\''job3'\'' is too big 00:23:49.774 Running I/O for 2 seconds... 
00:23:49.774 00:23:49.774 Latency(us) 00:23:49.774 [2024-11-18T14:25:41.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.02 16190.32 15.81 0.00 0.00 15817.63 3008.70 24546.21 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16179.56 15.80 0.00 0.00 15815.37 3515.11 24546.21 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.03 16169.06 15.79 0.00 0.00 15785.91 2889.54 21567.30 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16158.44 15.78 0.00 0.00 15783.09 3440.64 21567.30 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.03 16148.15 15.77 0.00 0.00 15751.02 2904.44 18588.39 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16137.42 15.76 0.00 0.00 15747.51 3395.96 18588.39 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.04 16221.00 15.84 0.00 0.00 15626.25 2681.02 18469.24 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.04 16210.41 15.83 0.00 0.00 15626.00 2040.55 18350.08 00:23:49.774 [2024-11-18T14:25:41.848Z] =================================================================================================================== 00:23:49.774 [2024-11-18T14:25:41.848Z] Total : 129414.35 126.38 0.00 0.00 15743.87 2040.55 24546.21' 00:23:49.774 14:25:41 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-18 14:25:38.469923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:49.774 [2024-11-18 14:25:38.470154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142274 ] 00:23:49.774 Using job config with 4 jobs 00:23:49.774 [2024-11-18 14:25:38.616251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.774 [2024-11-18 14:25:38.706828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.774 cpumask for '\''job0'\'' is too big 00:23:49.774 cpumask for '\''job1'\'' is too big 00:23:49.774 cpumask for '\''job2'\'' is too big 00:23:49.774 cpumask for '\''job3'\'' is too big 00:23:49.774 Running I/O for 2 seconds... 
00:23:49.774 00:23:49.774 Latency(us) 00:23:49.774 [2024-11-18T14:25:41.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.02 16190.32 15.81 0.00 0.00 15817.63 3008.70 24546.21 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16179.56 15.80 0.00 0.00 15815.37 3515.11 24546.21 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.03 16169.06 15.79 0.00 0.00 15785.91 2889.54 21567.30 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16158.44 15.78 0.00 0.00 15783.09 3440.64 21567.30 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.03 16148.15 15.77 0.00 0.00 15751.02 2904.44 18588.39 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16137.42 15.76 0.00 0.00 15747.51 3395.96 18588.39 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.04 16221.00 15.84 0.00 0.00 15626.25 2681.02 18469.24 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.04 16210.41 15.83 0.00 0.00 15626.00 2040.55 18350.08 00:23:49.774 [2024-11-18T14:25:41.848Z] =================================================================================================================== 00:23:49.774 [2024-11-18T14:25:41.848Z] Total : 129414.35 126.38 0.00 0.00 15743.87 2040.55 24546.21' 00:23:49.774 14:25:41 -- bdevperf/common.sh@32 -- # echo '[2024-11-18 14:25:38.469923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:49.774 [2024-11-18 14:25:38.470154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142274 ] 00:23:49.774 Using job config with 4 jobs 00:23:49.774 [2024-11-18 14:25:38.616251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.774 [2024-11-18 14:25:38.706828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.774 cpumask for '\''job0'\'' is too big 00:23:49.774 cpumask for '\''job1'\'' is too big 00:23:49.774 cpumask for '\''job2'\'' is too big 00:23:49.774 cpumask for '\''job3'\'' is too big 00:23:49.774 Running I/O for 2 seconds... 
00:23:49.774 00:23:49.774 Latency(us) 00:23:49.774 [2024-11-18T14:25:41.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.02 16190.32 15.81 0.00 0.00 15817.63 3008.70 24546.21 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16179.56 15.80 0.00 0.00 15815.37 3515.11 24546.21 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.03 16169.06 15.79 0.00 0.00 15785.91 2889.54 21567.30 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16158.44 15.78 0.00 0.00 15783.09 3440.64 21567.30 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.03 16148.15 15.77 0.00 0.00 15751.02 2904.44 18588.39 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.03 16137.42 15.76 0.00 0.00 15747.51 3395.96 18588.39 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc0 : 2.04 16221.00 15.84 0.00 0.00 15626.25 2681.02 18469.24 00:23:49.774 [2024-11-18T14:25:41.848Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.774 Malloc1 : 2.04 16210.41 15.83 0.00 0.00 15626.00 2040.55 18350.08 00:23:49.774 [2024-11-18T14:25:41.849Z] =================================================================================================================== 00:23:49.775 [2024-11-18T14:25:41.849Z] Total : 129414.35 126.38 0.00 0.00 15743.87 2040.55 24546.21' 00:23:49.775 14:25:41 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:49.775 14:25:41 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:49.775 14:25:41 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:23:49.775 14:25:41 -- bdevperf/test_config.sh@44 -- # cleanup 00:23:49.775 14:25:41 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:49.775 14:25:41 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:49.775 ************************************ 00:23:49.775 END TEST bdevperf_config 00:23:49.775 ************************************ 00:23:49.775 00:23:49.775 real 0m11.267s 00:23:49.775 user 0m9.605s 00:23:49.775 sys 0m1.074s 00:23:49.775 14:25:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:49.775 14:25:41 -- common/autotest_common.sh@10 -- # set +x 00:23:49.775 14:25:41 -- spdk/autotest.sh@185 -- # uname -s 00:23:49.775 14:25:41 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]] 00:23:49.775 14:25:41 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:49.775 14:25:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:49.775 14:25:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:49.775 14:25:41 -- common/autotest_common.sh@10 -- # set +x 00:23:49.775 ************************************ 00:23:49.775 START TEST reactor_set_interrupt 00:23:49.775 
************************************ 00:23:49.775 14:25:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:49.775 * Looking for test storage... 00:23:49.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.775 14:25:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:49.775 14:25:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:49.775 14:25:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:49.775 14:25:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:49.775 14:25:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:49.775 14:25:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:49.775 14:25:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:49.775 14:25:41 -- scripts/common.sh@335 -- # IFS=.-: 00:23:49.775 14:25:41 -- scripts/common.sh@335 -- # read -ra ver1 00:23:49.775 14:25:41 -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.775 14:25:41 -- scripts/common.sh@336 -- # read -ra ver2 00:23:49.775 14:25:41 -- scripts/common.sh@337 -- # local 'op=<' 00:23:49.775 14:25:41 -- scripts/common.sh@339 -- # ver1_l=2 00:23:49.775 14:25:41 -- scripts/common.sh@340 -- # ver2_l=1 00:23:49.775 14:25:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:49.775 14:25:41 -- scripts/common.sh@343 -- # case "$op" in 00:23:49.775 14:25:41 -- scripts/common.sh@344 -- # : 1 00:23:49.775 14:25:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:49.775 14:25:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:49.775 14:25:41 -- scripts/common.sh@364 -- # decimal 1 00:23:49.775 14:25:41 -- scripts/common.sh@352 -- # local d=1 00:23:49.775 14:25:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.775 14:25:41 -- scripts/common.sh@354 -- # echo 1 00:23:49.775 14:25:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:49.775 14:25:41 -- scripts/common.sh@365 -- # decimal 2 00:23:49.775 14:25:41 -- scripts/common.sh@352 -- # local d=2 00:23:49.775 14:25:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.775 14:25:41 -- scripts/common.sh@354 -- # echo 2 00:23:49.775 14:25:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:49.775 14:25:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:49.775 14:25:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:49.775 14:25:41 -- scripts/common.sh@367 -- # return 0 00:23:49.775 14:25:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.775 14:25:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:49.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.775 --rc genhtml_branch_coverage=1 00:23:49.775 --rc genhtml_function_coverage=1 00:23:49.775 --rc genhtml_legend=1 00:23:49.775 --rc geninfo_all_blocks=1 00:23:49.775 --rc geninfo_unexecuted_blocks=1 00:23:49.775 00:23:49.775 ' 00:23:49.775 14:25:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:49.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.775 --rc genhtml_branch_coverage=1 00:23:49.775 --rc genhtml_function_coverage=1 00:23:49.775 --rc genhtml_legend=1 00:23:49.775 --rc geninfo_all_blocks=1 00:23:49.775 --rc geninfo_unexecuted_blocks=1 00:23:49.775 00:23:49.775 ' 00:23:49.775 14:25:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:49.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.775 --rc genhtml_branch_coverage=1 
00:23:49.775 --rc genhtml_function_coverage=1 00:23:49.775 --rc genhtml_legend=1 00:23:49.775 --rc geninfo_all_blocks=1 00:23:49.775 --rc geninfo_unexecuted_blocks=1 00:23:49.775 00:23:49.775 ' 00:23:49.775 14:25:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:49.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.775 --rc genhtml_branch_coverage=1 00:23:49.775 --rc genhtml_function_coverage=1 00:23:49.775 --rc genhtml_legend=1 00:23:49.775 --rc geninfo_all_blocks=1 00:23:49.775 --rc geninfo_unexecuted_blocks=1 00:23:49.775 00:23:49.775 ' 00:23:49.775 14:25:41 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:23:49.775 14:25:41 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:49.775 14:25:41 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.775 14:25:41 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.775 14:25:41 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:23:49.775 14:25:41 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:49.775 14:25:41 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:49.775 14:25:41 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:49.775 14:25:41 -- common/autotest_common.sh@34 -- # set -e 00:23:49.775 14:25:41 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:49.775 14:25:41 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:49.775 14:25:41 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:49.775 14:25:41 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:49.775 14:25:41 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:49.775 14:25:41 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:49.775 14:25:41 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:49.775 14:25:41 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:49.775 14:25:41 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:49.775 14:25:41 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:49.775 14:25:41 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:49.775 14:25:41 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:49.775 14:25:41 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:49.775 14:25:41 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:49.775 14:25:41 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:49.775 14:25:41 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:49.775 14:25:41 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:49.775 14:25:41 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:49.775 14:25:41 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:49.775 14:25:41 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:49.775 14:25:41 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:49.775 14:25:41 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:49.775 14:25:41 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:49.775 14:25:41 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:49.775 14:25:41 -- common/build_config.sh@21 -- 
# CONFIG_ISCSI_INITIATOR=y 00:23:49.775 14:25:41 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:49.775 14:25:41 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:49.775 14:25:41 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:49.775 14:25:41 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:49.775 14:25:41 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:23:49.775 14:25:41 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:49.775 14:25:41 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:23:49.775 14:25:41 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:49.775 14:25:41 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:49.775 14:25:41 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:49.775 14:25:41 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:49.775 14:25:41 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:49.775 14:25:41 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:49.775 14:25:41 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:49.775 14:25:41 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:23:49.775 14:25:41 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:49.775 14:25:41 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:49.775 14:25:41 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:49.775 14:25:41 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:49.775 14:25:41 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:23:49.775 14:25:41 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:49.775 14:25:41 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:23:49.775 14:25:41 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:49.775 14:25:41 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:49.775 14:25:41 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:23:49.775 14:25:41 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:23:49.775 14:25:41 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:49.775 14:25:41 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:23:49.775 14:25:41 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:23:49.775 14:25:41 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:23:49.775 14:25:41 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:23:49.775 14:25:41 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:23:49.775 14:25:41 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:23:49.775 14:25:41 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:23:49.775 14:25:41 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:23:49.775 14:25:41 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:23:49.775 14:25:41 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:23:49.775 14:25:41 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:23:49.775 14:25:41 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:23:49.775 14:25:41 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:49.775 14:25:41 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:23:49.775 14:25:41 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:23:49.775 14:25:41 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:23:49.775 14:25:41 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:23:49.775 14:25:41 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:49.775 14:25:41 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:23:49.775 14:25:41 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:23:49.775 14:25:41 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:23:49.775 14:25:41 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:23:49.775 14:25:41 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:23:49.775 14:25:41 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:23:49.775 14:25:41 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:23:49.775 14:25:41 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:23:49.775 14:25:41 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:23:49.775 14:25:41 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:23:49.775 14:25:41 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:49.775 14:25:41 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:23:49.775 14:25:41 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:23:49.775 14:25:41 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:49.775 14:25:41 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:49.775 14:25:41 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:49.775 14:25:41 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:49.775 14:25:41 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:49.775 14:25:41 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:49.775 14:25:41 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:49.775 14:25:41 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:49.775 14:25:41 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:49.775 14:25:41 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:49.775 14:25:41 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:49.775 14:25:41 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:49.775 14:25:41 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:49.775 14:25:41 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:49.775 14:25:41 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:49.775 14:25:41 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:49.775 #define SPDK_CONFIG_H 00:23:49.775 #define SPDK_CONFIG_APPS 1 00:23:49.775 #define SPDK_CONFIG_ARCH native 00:23:49.775 #define SPDK_CONFIG_ASAN 1 00:23:49.775 #undef SPDK_CONFIG_AVAHI 00:23:49.775 #undef SPDK_CONFIG_CET 00:23:49.775 #define SPDK_CONFIG_COVERAGE 1 00:23:49.775 #define SPDK_CONFIG_CROSS_PREFIX 00:23:49.775 #undef SPDK_CONFIG_CRYPTO 00:23:49.775 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:49.775 #undef SPDK_CONFIG_CUSTOMOCF 00:23:49.775 #undef SPDK_CONFIG_DAOS 00:23:49.775 #define SPDK_CONFIG_DAOS_DIR 00:23:49.775 #define SPDK_CONFIG_DEBUG 1 00:23:49.775 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:49.775 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:23:49.776 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:23:49.776 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:23:49.776 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:49.776 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 
00:23:49.776 #define SPDK_CONFIG_EXAMPLES 1 00:23:49.776 #undef SPDK_CONFIG_FC 00:23:49.776 #define SPDK_CONFIG_FC_PATH 00:23:49.776 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:49.776 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:49.776 #undef SPDK_CONFIG_FUSE 00:23:49.776 #undef SPDK_CONFIG_FUZZER 00:23:49.776 #define SPDK_CONFIG_FUZZER_LIB 00:23:49.776 #undef SPDK_CONFIG_GOLANG 00:23:49.776 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:23:49.776 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:49.776 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:49.776 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:49.776 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:49.776 #define SPDK_CONFIG_IDXD 1 00:23:49.776 #undef SPDK_CONFIG_IDXD_KERNEL 00:23:49.776 #undef SPDK_CONFIG_IPSEC_MB 00:23:49.776 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:49.776 #define SPDK_CONFIG_ISAL 1 00:23:49.776 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:49.776 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:49.776 #define SPDK_CONFIG_LIBDIR 00:23:49.776 #undef SPDK_CONFIG_LTO 00:23:49.776 #define SPDK_CONFIG_MAX_LCORES 00:23:49.776 #define SPDK_CONFIG_NVME_CUSE 1 00:23:49.776 #undef SPDK_CONFIG_OCF 00:23:49.776 #define SPDK_CONFIG_OCF_PATH 00:23:49.776 #define SPDK_CONFIG_OPENSSL_PATH 00:23:49.776 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:49.776 #undef SPDK_CONFIG_PGO_USE 00:23:49.776 #define SPDK_CONFIG_PREFIX /usr/local 00:23:49.776 #define SPDK_CONFIG_RAID5F 1 00:23:49.776 #undef SPDK_CONFIG_RBD 00:23:49.776 #define SPDK_CONFIG_RDMA 1 00:23:49.776 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:49.776 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:49.776 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:49.776 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:49.776 #undef SPDK_CONFIG_SHARED 00:23:49.776 #undef SPDK_CONFIG_SMA 00:23:49.776 #define SPDK_CONFIG_TESTS 1 00:23:49.776 #undef SPDK_CONFIG_TSAN 00:23:49.776 #undef SPDK_CONFIG_UBLK 00:23:49.776 #define SPDK_CONFIG_UBSAN 1 00:23:49.776 #define SPDK_CONFIG_UNIT_TESTS 1 00:23:49.776 #undef SPDK_CONFIG_URING 00:23:49.776 #define SPDK_CONFIG_URING_PATH 00:23:49.776 #undef SPDK_CONFIG_URING_ZNS 00:23:49.776 #undef SPDK_CONFIG_USDT 00:23:49.776 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:49.776 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:49.776 #undef SPDK_CONFIG_VFIO_USER 00:23:49.776 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:49.776 #define SPDK_CONFIG_VHOST 1 00:23:49.776 #define SPDK_CONFIG_VIRTIO 1 00:23:49.776 #undef SPDK_CONFIG_VTUNE 00:23:49.776 #define SPDK_CONFIG_VTUNE_DIR 00:23:49.776 #define SPDK_CONFIG_WERROR 1 00:23:49.776 #define SPDK_CONFIG_WPDK_DIR 00:23:49.776 #undef SPDK_CONFIG_XNVME 00:23:49.776 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:49.776 14:25:41 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:49.776 14:25:41 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.776 14:25:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.776 14:25:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.776 14:25:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.776 14:25:41 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.776 14:25:41 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.776 14:25:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.776 14:25:41 -- paths/export.sh@5 -- # export PATH 00:23:49.776 14:25:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.776 14:25:41 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:49.776 14:25:41 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:49.776 14:25:41 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:49.776 14:25:41 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:49.776 14:25:41 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:49.776 14:25:41 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:49.776 14:25:41 -- pm/common@16 -- # TEST_TAG=N/A 00:23:49.776 14:25:41 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:49.776 14:25:41 -- common/autotest_common.sh@52 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:23:49.776 14:25:41 -- common/autotest_common.sh@56 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:49.776 14:25:41 -- common/autotest_common.sh@58 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:23:49.776 14:25:41 -- common/autotest_common.sh@60 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:49.776 14:25:41 -- common/autotest_common.sh@62 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:23:49.776 14:25:41 -- common/autotest_common.sh@64 -- # : 00:23:49.776 14:25:41 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:23:49.776 14:25:41 -- common/autotest_common.sh@66 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:23:49.776 
14:25:41 -- common/autotest_common.sh@68 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:23:49.776 14:25:41 -- common/autotest_common.sh@70 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:23:49.776 14:25:41 -- common/autotest_common.sh@72 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:49.776 14:25:41 -- common/autotest_common.sh@74 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:23:49.776 14:25:41 -- common/autotest_common.sh@76 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:23:49.776 14:25:41 -- common/autotest_common.sh@78 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:23:49.776 14:25:41 -- common/autotest_common.sh@80 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:23:49.776 14:25:41 -- common/autotest_common.sh@82 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:23:49.776 14:25:41 -- common/autotest_common.sh@84 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:23:49.776 14:25:41 -- common/autotest_common.sh@86 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:23:49.776 14:25:41 -- common/autotest_common.sh@88 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:23:49.776 14:25:41 -- common/autotest_common.sh@90 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:49.776 14:25:41 -- common/autotest_common.sh@92 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:23:49.776 14:25:41 -- common/autotest_common.sh@94 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:23:49.776 14:25:41 -- common/autotest_common.sh@96 -- # : rdma 00:23:49.776 14:25:41 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:49.776 14:25:41 -- common/autotest_common.sh@98 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:23:49.776 14:25:41 -- common/autotest_common.sh@100 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:23:49.776 14:25:41 -- common/autotest_common.sh@102 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:23:49.776 14:25:41 -- common/autotest_common.sh@104 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:23:49.776 14:25:41 -- common/autotest_common.sh@106 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:23:49.776 14:25:41 -- common/autotest_common.sh@108 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:23:49.776 14:25:41 -- common/autotest_common.sh@110 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:23:49.776 14:25:41 -- common/autotest_common.sh@112 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:49.776 14:25:41 -- common/autotest_common.sh@114 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 
00:23:49.776 14:25:41 -- common/autotest_common.sh@116 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:23:49.776 14:25:41 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:23:49.776 14:25:41 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:49.776 14:25:41 -- common/autotest_common.sh@120 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:23:49.776 14:25:41 -- common/autotest_common.sh@122 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:23:49.776 14:25:41 -- common/autotest_common.sh@124 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:23:49.776 14:25:41 -- common/autotest_common.sh@126 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:23:49.776 14:25:41 -- common/autotest_common.sh@128 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:23:49.776 14:25:41 -- common/autotest_common.sh@130 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:23:49.776 14:25:41 -- common/autotest_common.sh@132 -- # : v22.11.4 00:23:49.776 14:25:41 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:23:49.776 14:25:41 -- common/autotest_common.sh@134 -- # : true 00:23:49.776 14:25:41 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:23:49.776 14:25:41 -- common/autotest_common.sh@136 -- # : 1 00:23:49.776 14:25:41 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:23:49.776 14:25:41 -- common/autotest_common.sh@138 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:23:49.776 14:25:41 -- common/autotest_common.sh@140 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:23:49.776 14:25:41 -- common/autotest_common.sh@142 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:23:49.776 14:25:41 -- common/autotest_common.sh@144 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:23:49.776 14:25:41 -- common/autotest_common.sh@146 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:23:49.776 14:25:41 -- common/autotest_common.sh@148 -- # : 00:23:49.776 14:25:41 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:23:49.776 14:25:41 -- common/autotest_common.sh@150 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:23:49.776 14:25:41 -- common/autotest_common.sh@152 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:23:49.776 14:25:41 -- common/autotest_common.sh@154 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:23:49.776 14:25:41 -- common/autotest_common.sh@156 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:23:49.776 14:25:41 -- common/autotest_common.sh@158 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:23:49.776 14:25:41 -- common/autotest_common.sh@160 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:23:49.776 14:25:41 -- common/autotest_common.sh@163 -- # : 00:23:49.776 14:25:41 -- 
common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:23:49.776 14:25:41 -- common/autotest_common.sh@165 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:23:49.776 14:25:41 -- common/autotest_common.sh@167 -- # : 0 00:23:49.776 14:25:41 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:49.776 14:25:41 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.776 14:25:41 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.777 14:25:41 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:49.777 14:25:41 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:49.777 14:25:41 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:49.777 14:25:41 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:49.777 14:25:41 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:49.777 14:25:41 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:23:49.777 14:25:41 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:49.777 14:25:41 -- common/autotest_common.sh@189 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:49.777 14:25:41 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:49.777 14:25:41 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:49.777 14:25:41 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:49.777 14:25:41 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:23:49.777 14:25:41 -- common/autotest_common.sh@196 -- # cat 00:23:49.777 14:25:41 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:23:49.777 14:25:41 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:49.777 14:25:41 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:49.777 14:25:41 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:49.777 14:25:41 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:49.777 14:25:41 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:23:49.777 14:25:41 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:23:49.777 14:25:41 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:49.777 14:25:41 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:49.777 14:25:41 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:49.777 14:25:41 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:49.777 14:25:41 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:23:49.777 14:25:41 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:23:49.777 14:25:41 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:49.777 14:25:41 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:49.777 14:25:41 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:49.777 14:25:41 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:49.777 14:25:41 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:49.777 14:25:41 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:49.777 14:25:41 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:23:49.777 14:25:41 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:23:49.777 14:25:41 -- common/autotest_common.sh@249 -- # _LCOV= 00:23:49.777 14:25:41 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:23:49.777 14:25:41 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:23:49.777 14:25:41 -- common/autotest_common.sh@255 -- # lcov_opt= 00:23:49.777 14:25:41 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:23:49.777 14:25:41 -- common/autotest_common.sh@259 -- # export valgrind= 00:23:49.777 
14:25:41 -- common/autotest_common.sh@259 -- # valgrind= 00:23:49.777 14:25:41 -- common/autotest_common.sh@265 -- # uname -s 00:23:49.777 14:25:41 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:23:49.777 14:25:41 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:23:49.777 14:25:41 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:23:49.777 14:25:41 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:23:49.777 14:25:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@275 -- # MAKE=make 00:23:49.777 14:25:41 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:23:49.777 14:25:41 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:23:49.777 14:25:41 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:23:49.777 14:25:41 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:49.777 14:25:41 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:23:49.777 14:25:41 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:23:49.777 14:25:41 -- common/autotest_common.sh@319 -- # [[ -z 142341 ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@319 -- # kill -0 142341 00:23:49.777 14:25:41 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:23:49.777 14:25:41 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:23:49.777 14:25:41 -- common/autotest_common.sh@332 -- # local mount target_dir 00:23:49.777 14:25:41 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:23:49.777 14:25:41 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:23:49.777 14:25:41 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:23:49.777 14:25:41 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:23:49.777 14:25:41 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.jCHSM9 00:23:49.777 14:25:41 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:49.777 14:25:41 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.jCHSM9/tests/interrupt /tmp/spdk.jCHSM9 00:23:49.777 14:25:41 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@328 -- # df -T 00:23:49.777 14:25:41 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=4726784 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:23:49.777 14:25:41 -- 
common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=9441337344 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=11158679552 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267142144 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268399616 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:23:49.777 14:25:41 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # avails["$mount"]=98262712320 00:23:49.777 14:25:41 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:23:49.777 14:25:41 -- common/autotest_common.sh@364 -- # uses["$mount"]=1440067584 00:23:49.777 14:25:41 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:49.777 14:25:41 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:23:49.777 * Looking for test storage... 
00:23:49.777 14:25:41 -- common/autotest_common.sh@369 -- # local target_space new_size 00:23:49.777 14:25:41 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:23:49.777 14:25:41 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.777 14:25:41 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:49.777 14:25:41 -- common/autotest_common.sh@373 -- # mount=/ 00:23:49.777 14:25:41 -- common/autotest_common.sh@375 -- # target_space=9441337344 00:23:49.777 14:25:41 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:23:49.777 14:25:41 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:23:49.777 14:25:41 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@382 -- # new_size=13373272064 00:23:49.777 14:25:41 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:49.777 14:25:41 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.777 14:25:41 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.777 14:25:41 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.777 14:25:41 -- common/autotest_common.sh@390 -- # return 0 00:23:49.777 14:25:41 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:23:49.777 14:25:41 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:23:49.777 14:25:41 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:49.777 14:25:41 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:49.777 14:25:41 -- common/autotest_common.sh@1682 -- # true 00:23:49.777 14:25:41 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:23:49.777 14:25:41 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:49.777 14:25:41 -- common/autotest_common.sh@27 -- # exec 00:23:49.777 14:25:41 -- common/autotest_common.sh@29 -- # exec 00:23:49.777 14:25:41 -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:49.778 14:25:41 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:49.778 14:25:41 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:49.778 14:25:41 -- common/autotest_common.sh@18 -- # set -x 00:23:49.778 14:25:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:49.778 14:25:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:49.778 14:25:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:49.778 14:25:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:49.778 14:25:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:49.778 14:25:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:49.778 14:25:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:49.778 14:25:41 -- scripts/common.sh@335 -- # IFS=.-: 00:23:49.778 14:25:41 -- scripts/common.sh@335 -- # read -ra ver1 00:23:49.778 14:25:41 -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.778 14:25:41 -- scripts/common.sh@336 -- # read -ra ver2 00:23:49.778 14:25:41 -- scripts/common.sh@337 -- # local 'op=<' 00:23:49.778 14:25:41 -- scripts/common.sh@339 -- # ver1_l=2 00:23:49.778 14:25:41 -- scripts/common.sh@340 -- # ver2_l=1 00:23:49.778 14:25:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:49.778 14:25:41 -- scripts/common.sh@343 -- # case "$op" in 00:23:49.778 14:25:41 -- scripts/common.sh@344 -- # : 1 00:23:49.778 14:25:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:49.778 14:25:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:49.778 14:25:41 -- scripts/common.sh@364 -- # decimal 1 00:23:49.778 14:25:41 -- scripts/common.sh@352 -- # local d=1 00:23:49.778 14:25:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.778 14:25:41 -- scripts/common.sh@354 -- # echo 1 00:23:49.778 14:25:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:49.778 14:25:41 -- scripts/common.sh@365 -- # decimal 2 00:23:49.778 14:25:41 -- scripts/common.sh@352 -- # local d=2 00:23:49.778 14:25:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.778 14:25:41 -- scripts/common.sh@354 -- # echo 2 00:23:49.778 14:25:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:49.778 14:25:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:49.778 14:25:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:49.778 14:25:41 -- scripts/common.sh@367 -- # return 0 00:23:49.778 14:25:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.778 14:25:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.778 --rc genhtml_branch_coverage=1 00:23:49.778 --rc genhtml_function_coverage=1 00:23:49.778 --rc genhtml_legend=1 00:23:49.778 --rc geninfo_all_blocks=1 00:23:49.778 --rc geninfo_unexecuted_blocks=1 00:23:49.778 00:23:49.778 ' 00:23:49.778 14:25:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.778 --rc genhtml_branch_coverage=1 00:23:49.778 --rc genhtml_function_coverage=1 00:23:49.778 --rc genhtml_legend=1 00:23:49.778 --rc geninfo_all_blocks=1 00:23:49.778 --rc geninfo_unexecuted_blocks=1 00:23:49.778 00:23:49.778 ' 00:23:49.778 14:25:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.778 --rc genhtml_branch_coverage=1 00:23:49.778 --rc genhtml_function_coverage=1 00:23:49.778 --rc genhtml_legend=1 00:23:49.778 --rc geninfo_all_blocks=1 00:23:49.778 --rc 
geninfo_unexecuted_blocks=1 00:23:49.778 00:23:49.778 ' 00:23:49.778 14:25:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:49.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.778 --rc genhtml_branch_coverage=1 00:23:49.778 --rc genhtml_function_coverage=1 00:23:49.778 --rc genhtml_legend=1 00:23:49.778 --rc geninfo_all_blocks=1 00:23:49.778 --rc geninfo_unexecuted_blocks=1 00:23:49.778 00:23:49.778 ' 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:49.778 14:25:41 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:49.778 14:25:41 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:49.778 14:25:41 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142406 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.778 14:25:41 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142406 /var/tmp/spdk.sock 00:23:49.778 14:25:41 -- common/autotest_common.sh@829 -- # '[' -z 142406 ']' 00:23:49.778 14:25:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.778 14:25:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.778 14:25:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.778 14:25:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.778 14:25:41 -- common/autotest_common.sh@10 -- # set +x 00:23:49.778 [2024-11-18 14:25:41.699313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
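start_intr_tgt, traced just above, backgrounds the interrupt_tgt example app on a three-core mask and then blocks in waitforlisten until the RPC socket answers, which is why the 'Waiting for process to start up...' message precedes the SPDK banner. A hedged stand-in for that launch-and-wait step (the real waitforlisten has richer retry and timeout handling; rpc_get_methods serves here only as a liveness probe):

    rpc_addr=/var/tmp/spdk.sock
    "$rootdir"/build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    for _ in $(seq 1 100); do   # bounded poll for the UNIX-domain RPC socket
        "$rootdir"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done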
00:23:49.778 [2024-11-18 14:25:41.699581] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142406 ] 00:23:50.037 [2024-11-18 14:25:41.867874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.037 [2024-11-18 14:25:41.942392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.037 [2024-11-18 14:25:41.942533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.037 [2024-11-18 14:25:41.942528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.037 [2024-11-18 14:25:42.021960] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:50.974 14:25:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.974 14:25:42 -- common/autotest_common.sh@862 -- # return 0 00:23:50.974 14:25:42 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:23:50.974 14:25:42 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:50.974 Malloc0 00:23:50.974 Malloc1 00:23:50.974 Malloc2 00:23:50.974 14:25:42 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:23:50.974 14:25:42 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:50.974 14:25:43 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:50.974 14:25:43 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:50.974 5000+0 records in 00:23:50.974 5000+0 records out 00:23:50.974 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0282928 s, 362 MB/s 00:23:50.974 14:25:43 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:51.233 AIO0 00:23:51.233 14:25:43 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 142406 00:23:51.233 14:25:43 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 142406 without_thd 00:23:51.233 14:25:43 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142406 00:23:51.233 14:25:43 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:23:51.233 14:25:43 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:51.233 14:25:43 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:51.233 14:25:43 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:23:51.233 14:25:43 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:51.233 14:25:43 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:23:51.233 14:25:43 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.233 14:25:43 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.233 14:25:43 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:23:51.492 14:25:43 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:51.492 14:25:43 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:51.492 14:25:43 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:23:51.751 14:25:43 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:51.751 spdk_thread ids are 1 on reactor0. 00:23:51.751 14:25:43 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:51.751 14:25:43 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:51.751 14:25:43 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142406 0 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142406 0 idle 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:51.751 14:25:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142406 root 20 0 20.1t 57708 25784 S 0.0 0.5 0:00.34 reactor_0' 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@48 -- # echo 142406 root 20 0 20.1t 57708 25784 S 0.0 0.5 0:00.34 reactor_0 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:52.010 14:25:43 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:52.010 14:25:43 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142406 1 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142406 1 idle 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:52.010 
14:25:43 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:52.010 14:25:43 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:23:52.269 14:25:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142415 root 20 0 20.1t 57708 25784 S 0.0 0.5 0:00.00 reactor_1' 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # echo 142415 root 20 0 20.1t 57708 25784 S 0.0 0.5 0:00.00 reactor_1 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:52.270 14:25:44 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:52.270 14:25:44 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142406 2 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142406 2 idle 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142416 root 20 0 20.1t 57708 25784 S 0.0 0.5 0:00.00 reactor_2' 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # echo 142416 root 20 0 20.1t 57708 25784 S 0.0 0.5 0:00.00 reactor_2 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:52.270 14:25:44 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:52.270 14:25:44 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:23:52.270 14:25:44 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:23:52.270 
14:25:44 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:23:52.529 [2024-11-18 14:25:44.582659] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:52.529 14:25:44 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:52.788 [2024-11-18 14:25:44.846457] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:52.788 [2024-11-18 14:25:44.847117] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:52.788 14:25:44 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:53.047 [2024-11-18 14:25:45.094262] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:23:53.047 [2024-11-18 14:25:45.094727] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:53.047 14:25:45 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:53.047 14:25:45 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142406 0 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142406 0 busy 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:53.047 14:25:45 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142406 root 20 0 20.1t 57880 25784 R 99.9 0.5 0:00.77 reactor_0' 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@48 -- # echo 142406 root 20 0 20.1t 57880 25784 R 99.9 0.5 0:00.77 reactor_0 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:53.306 14:25:45 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:53.306 14:25:45 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142406 2 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142406 2 busy 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:53.306 14:25:45 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:53.306 14:25:45 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:53.307 14:25:45 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:53.307 14:25:45 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142416 root 20 0 20.1t 57880 25784 R 99.9 0.5 0:00.34 reactor_2' 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@48 -- # echo 142416 root 20 0 20.1t 57880 25784 R 99.9 0.5 0:00.34 reactor_2 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:53.566 14:25:45 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:53.566 14:25:45 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:53.824 [2024-11-18 14:25:45.642266] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:53.824 [2024-11-18 14:25:45.642675] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:53.824 14:25:45 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:23:53.825 14:25:45 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142406 2 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142406 2 idle 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142416 root 20 0 20.1t 57928 25784 S 0.0 0.5 0:00.54 reactor_2' 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@48 -- # echo 142416 root 20 0 20.1t 57928 25784 S 0.0 0.5 0:00.54 reactor_2 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@49 -- 
# cpu_rate=0 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:53.825 14:25:45 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:53.825 14:25:45 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:54.083 [2024-11-18 14:25:46.006277] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:23:54.083 [2024-11-18 14:25:46.006694] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:54.083 14:25:46 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:23:54.083 14:25:46 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:23:54.084 14:25:46 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:23:54.342 [2024-11-18 14:25:46.270602] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:54.342 14:25:46 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142406 0 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142406 0 idle 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@33 -- # local pid=142406 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142406 -w 256 00:23:54.342 14:25:46 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142406 root 20 0 20.1t 58032 25784 S 6.7 0.5 0:01.52 reactor_0' 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@48 -- # echo 142406 root 20 0 20.1t 58032 25784 S 6.7 0.5 0:01.52 reactor_0 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:23:54.602 14:25:46 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:54.602 14:25:46 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:23:54.602 14:25:46 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:23:54.602 14:25:46 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:23:54.602 14:25:46 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 142406 00:23:54.602 14:25:46 -- 
common/autotest_common.sh@936 -- # '[' -z 142406 ']' 00:23:54.602 14:25:46 -- common/autotest_common.sh@940 -- # kill -0 142406 00:23:54.602 14:25:46 -- common/autotest_common.sh@941 -- # uname 00:23:54.602 14:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:54.602 14:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142406 00:23:54.602 14:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:54.602 14:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:54.602 killing process with pid 142406 00:23:54.602 14:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142406' 00:23:54.602 14:25:46 -- common/autotest_common.sh@955 -- # kill 142406 00:23:54.602 14:25:46 -- common/autotest_common.sh@960 -- # wait 142406 00:23:54.861 14:25:46 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:23:54.861 14:25:46 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142553 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.861 14:25:46 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142553 /var/tmp/spdk.sock 00:23:54.861 14:25:46 -- common/autotest_common.sh@829 -- # '[' -z 142553 ']' 00:23:54.861 14:25:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.861 14:25:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.861 14:25:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.861 14:25:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.861 14:25:46 -- common/autotest_common.sh@10 -- # set +x 00:23:54.861 [2024-11-18 14:25:46.807251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:54.861 [2024-11-18 14:25:46.807490] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142553 ] 00:23:55.120 [2024-11-18 14:25:46.954174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:55.121 [2024-11-18 14:25:47.017706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.121 [2024-11-18 14:25:47.017861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.121 [2024-11-18 14:25:47.017856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.121 [2024-11-18 14:25:47.094603] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
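killprocess, traced at the start of this block before the second target comes up, verifies the pid still answers kill -0, checks the process name before sending the signal, and waits afterwards so nothing is left as a zombie. A simplified sketch of that shape (the real helper also resolves the child process when the name turns out to be sudo; here that case is just refused):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still alive?
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
        [ "$process_name" != sudo ] || return 1           # simplified guard
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # reap the child
    }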
00:23:55.688 14:25:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.688 14:25:47 -- common/autotest_common.sh@862 -- # return 0 00:23:55.688 14:25:47 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:23:55.688 14:25:47 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:55.946 Malloc0 00:23:55.946 Malloc1 00:23:55.946 Malloc2 00:23:55.946 14:25:48 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:23:55.947 14:25:48 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:55.947 14:25:48 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:56.205 14:25:48 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:56.205 5000+0 records in 00:23:56.205 5000+0 records out 00:23:56.205 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0268854 s, 381 MB/s 00:23:56.205 14:25:48 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:56.464 AIO0 00:23:56.464 14:25:48 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 142553 00:23:56.464 14:25:48 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 142553 00:23:56.464 14:25:48 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=142553 00:23:56.464 14:25:48 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:23:56.464 14:25:48 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:56.464 14:25:48 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:56.464 14:25:48 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:23:56.464 14:25:48 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:56.464 14:25:48 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:23:56.464 14:25:48 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:56.464 14:25:48 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:56.464 14:25:48 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:23:56.724 14:25:48 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:56.724 14:25:48 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:23:56.724 spdk_thread ids are 1 on reactor0. 
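reactor_get_thread_ids, traced above for both runs, turns a reactor cpumask into the SPDK thread ids pinned to that core by filtering thread_get_stats through jq: mask 0x1 yields thread id 1 (the app thread on reactor 0), while 0x4 yields an empty result until a thread is scheduled on that core. The pattern as traced (the trace shows the mask reduced to 1 and 4 before reaching jq; arithmetic expansion is one way to get that decimal form):

    reactor_cpumask=0x1
    reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1
    jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    "$rootdir"/scripts/rpc.py thread_get_stats |
        jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"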
00:23:56.724 14:25:48 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:56.724 14:25:48 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:56.724 14:25:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:56.724 14:25:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142553 0 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142553 0 idle 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:56.724 14:25:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142553 root 20 0 20.1t 56528 25736 S 0.0 0.5 0:00.27 reactor_0' 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@48 -- # echo 142553 root 20 0 20.1t 56528 25736 S 0.0 0.5 0:00.27 reactor_0 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:56.984 14:25:48 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:56.984 14:25:48 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142553 1 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142553 1 idle 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:56.984 14:25:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142557 root 20 0 20.1t 56528 25736 S 0.0 0.5 0:00.00 reactor_1' 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@48 -- # echo 142557 root 20 0 20.1t 56528 25736 S 0.0 0.5 0:00.00 reactor_1 00:23:57.243 14:25:49 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:57.243 14:25:49 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:57.243 14:25:49 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 142553 2 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142553 2 idle 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142558 root 20 0 20.1t 56528 25736 S 0.0 0.5 0:00.00 reactor_2' 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@48 -- # echo 142558 root 20 0 20.1t 56528 25736 S 0.0 0.5 0:00.00 reactor_2 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:57.243 14:25:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:57.244 14:25:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:57.244 14:25:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:57.244 14:25:49 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:57.244 14:25:49 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:57.244 14:25:49 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:57.244 14:25:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:57.244 14:25:49 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:23:57.244 14:25:49 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:57.503 [2024-11-18 14:25:49.534894] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:57.503 [2024-11-18 14:25:49.535207] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
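Every busy/idle assertion in this log follows one recipe: take a single batch-mode top snapshot of the target's threads, grep out the reactor line, pull the %CPU column, drop the fraction, and compare against fixed thresholds — busy demands at least 70%, idle tolerates at most 30%. A condensed sketch of that check (column position and thresholds as traced; the surrounding retry loop is omitted):

    pid=142553 idx=0 state=busy
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
    cpu_rate=$(echo "$top_reactor" | awk '{print $9}')   # %CPU field
    cpu_rate=${cpu_rate%.*}                              # 99.9 -> 99, 0.0 -> 0
    if [[ $state = busy ]]; then
        [[ $cpu_rate -lt 70 ]] && { echo "reactor_$idx not busy"; exit 1; }
    else
        [[ $cpu_rate -gt 30 ]] && { echo "reactor_$idx not idle"; exit 1; }
    fi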
00:23:57.503 [2024-11-18 14:25:49.535523] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:57.503 14:25:49 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:57.762 [2024-11-18 14:25:49.726764] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:23:57.762 [2024-11-18 14:25:49.727179] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:57.762 14:25:49 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:57.762 14:25:49 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142553 0 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142553 0 busy 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:57.762 14:25:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142553 root 20 0 20.1t 56640 25736 R 99.9 0.5 0:00.65 reactor_0' 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@48 -- # echo 142553 root 20 0 20.1t 56640 25736 R 99.9 0.5 0:00.65 reactor_0 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:58.021 14:25:49 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:58.021 14:25:49 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 142553 2 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 142553 2 busy 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:58.021 14:25:49 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
142558 root 20 0 20.1t 56640 25736 R 99.9 0.5 0:00.35 reactor_2' 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@48 -- # echo 142558 root 20 0 20.1t 56640 25736 R 99.9 0.5 0:00.35 reactor_2 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:58.021 14:25:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:58.021 14:25:50 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:58.280 [2024-11-18 14:25:50.323059] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:58.280 [2024-11-18 14:25:50.323350] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:58.280 14:25:50 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:23:58.280 14:25:50 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 142553 2 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142553 2 idle 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:58.280 14:25:50 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142558 root 20 0 20.1t 56700 25736 S 0.0 0.5 0:00.59 reactor_2' 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@48 -- # echo 142558 root 20 0 20.1t 56700 25736 S 0.0 0.5 0:00.59 reactor_2 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:58.538 14:25:50 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:58.539 14:25:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:58.539 14:25:50 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:58.798 [2024-11-18 14:25:50.739114] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:23:58.798 [2024-11-18 14:25:50.739586] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:23:58.798 [2024-11-18 14:25:50.739640] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:58.798 14:25:50 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:23:58.798 14:25:50 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 142553 0 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 142553 0 idle 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@33 -- # local pid=142553 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:58.798 14:25:50 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 142553 -w 256 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 142553 root 20 0 20.1t 56756 25736 S 0.0 0.5 0:01.49 reactor_0' 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@48 -- # echo 142553 root 20 0 20.1t 56756 25736 S 0.0 0.5 0:01.49 reactor_0 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:59.058 14:25:50 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:59.058 14:25:50 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:23:59.058 14:25:50 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:23:59.058 14:25:50 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:59.058 14:25:50 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 142553 00:23:59.058 14:25:50 -- common/autotest_common.sh@936 -- # '[' -z 142553 ']' 00:23:59.058 14:25:50 -- common/autotest_common.sh@940 -- # kill -0 142553 00:23:59.058 14:25:50 -- common/autotest_common.sh@941 -- # uname 00:23:59.058 14:25:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:59.058 14:25:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142553 00:23:59.058 14:25:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:59.058 14:25:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:59.058 14:25:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142553' 00:23:59.058 killing process with pid 142553 00:23:59.058 14:25:50 -- common/autotest_common.sh@955 -- # kill 142553 00:23:59.058 14:25:50 -- common/autotest_common.sh@960 -- # wait 142553 00:23:59.318 14:25:51 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:23:59.318 14:25:51 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:23:59.318 ************************************ 00:23:59.318 END TEST reactor_set_interrupt 00:23:59.318 ************************************ 00:23:59.318 00:23:59.318 real 0m10.004s 00:23:59.318 user 0m9.551s 00:23:59.318 sys 0m1.618s 00:23:59.318 14:25:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:59.318 14:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.318 14:25:51 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:23:59.318 14:25:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:59.318 14:25:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.318 14:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.318 ************************************ 00:23:59.318 START TEST reap_unregistered_poller 00:23:59.318 ************************************ 00:23:59.318 14:25:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:23:59.318 * Looking for test storage... 00:23:59.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.580 14:25:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:59.580 14:25:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:59.580 14:25:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:59.580 14:25:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:59.580 14:25:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:59.580 14:25:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:59.580 14:25:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:59.580 14:25:51 -- scripts/common.sh@335 -- # IFS=.-: 00:23:59.580 14:25:51 -- scripts/common.sh@335 -- # read -ra ver1 00:23:59.580 14:25:51 -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.580 14:25:51 -- scripts/common.sh@336 -- # read -ra ver2 00:23:59.580 14:25:51 -- scripts/common.sh@337 -- # local 'op=<' 00:23:59.580 14:25:51 -- scripts/common.sh@339 -- # ver1_l=2 00:23:59.580 14:25:51 -- scripts/common.sh@340 -- # ver2_l=1 00:23:59.580 14:25:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:59.580 14:25:51 -- scripts/common.sh@343 -- # case "$op" in 00:23:59.580 14:25:51 -- scripts/common.sh@344 -- # : 1 00:23:59.580 14:25:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:59.580 14:25:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.580 14:25:51 -- scripts/common.sh@364 -- # decimal 1 00:23:59.580 14:25:51 -- scripts/common.sh@352 -- # local d=1 00:23:59.580 14:25:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.580 14:25:51 -- scripts/common.sh@354 -- # echo 1 00:23:59.580 14:25:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:59.580 14:25:51 -- scripts/common.sh@365 -- # decimal 2 00:23:59.580 14:25:51 -- scripts/common.sh@352 -- # local d=2 00:23:59.580 14:25:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.580 14:25:51 -- scripts/common.sh@354 -- # echo 2 00:23:59.580 14:25:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:59.580 14:25:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:59.580 14:25:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:59.580 14:25:51 -- scripts/common.sh@367 -- # return 0 00:23:59.580 14:25:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.580 14:25:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.580 --rc genhtml_branch_coverage=1 00:23:59.580 --rc genhtml_function_coverage=1 00:23:59.580 --rc genhtml_legend=1 00:23:59.580 --rc geninfo_all_blocks=1 00:23:59.580 --rc geninfo_unexecuted_blocks=1 00:23:59.580 00:23:59.580 ' 00:23:59.580 14:25:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.580 --rc genhtml_branch_coverage=1 00:23:59.580 --rc genhtml_function_coverage=1 00:23:59.580 --rc genhtml_legend=1 00:23:59.580 --rc geninfo_all_blocks=1 00:23:59.580 --rc geninfo_unexecuted_blocks=1 00:23:59.580 00:23:59.580 ' 00:23:59.580 14:25:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.580 --rc genhtml_branch_coverage=1 00:23:59.580 --rc genhtml_function_coverage=1 00:23:59.580 --rc genhtml_legend=1 00:23:59.580 --rc geninfo_all_blocks=1 00:23:59.580 --rc geninfo_unexecuted_blocks=1 00:23:59.580 00:23:59.580 ' 00:23:59.580 14:25:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.580 --rc genhtml_branch_coverage=1 00:23:59.580 --rc genhtml_function_coverage=1 00:23:59.580 --rc genhtml_legend=1 00:23:59.580 --rc geninfo_all_blocks=1 00:23:59.580 --rc geninfo_unexecuted_blocks=1 00:23:59.580 00:23:59.580 ' 00:23:59.581 14:25:51 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:23:59.581 14:25:51 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:23:59.581 14:25:51 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.581 14:25:51 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.581 14:25:51 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
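
The lt/cmp_versions trace above shows how the harness decides whether the installed lcov predates 2.x: each version string is split on '.', '-' and ':' and the resulting fields are compared numerically, left to right, with missing fields treated as 0. A minimal self-contained sketch of that comparison (the name version_lt is illustrative; this is not the actual scripts/common.sh source):

#!/usr/bin/env bash
# Return 0 (true) if version $1 sorts strictly before version $2.
# Fields are split on '.', '-' and ':' and compared numerically,
# missing fields count as 0 -- the same idea the autotest helper
# uses to check "lcov --version" against a minimum.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2

version_lt 1.15 2 succeeds here, which is why the trace above ends in "return 0" and the branch/function-coverage flags get exported into LCOV_OPTS.
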
00:23:59.581 14:25:51 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:59.581 14:25:51 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:59.581 14:25:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:59.581 14:25:51 -- common/autotest_common.sh@34 -- # set -e 00:23:59.581 14:25:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:59.581 14:25:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:59.581 14:25:51 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:59.581 14:25:51 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:59.581 14:25:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:59.581 14:25:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:59.581 14:25:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:59.581 14:25:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:59.581 14:25:51 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:59.581 14:25:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:59.581 14:25:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:59.581 14:25:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:59.581 14:25:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:59.581 14:25:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:59.581 14:25:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:59.581 14:25:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:59.581 14:25:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:59.581 14:25:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:59.581 14:25:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:59.581 14:25:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:59.581 14:25:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:59.581 14:25:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:59.581 14:25:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:59.581 14:25:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:59.581 14:25:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:59.581 14:25:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:59.581 14:25:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:59.581 14:25:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:59.581 14:25:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:59.581 14:25:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:23:59.581 14:25:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:23:59.581 14:25:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:23:59.581 14:25:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:59.581 14:25:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:59.581 14:25:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:59.581 14:25:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:59.581 14:25:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:59.581 14:25:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:59.581 14:25:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:59.581 14:25:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:23:59.581 14:25:51 
-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:59.581 14:25:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:59.581 14:25:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:59.581 14:25:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:59.581 14:25:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:23:59.581 14:25:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:59.581 14:25:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:23:59.581 14:25:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:59.581 14:25:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:59.581 14:25:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:23:59.581 14:25:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:23:59.581 14:25:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:59.581 14:25:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:23:59.581 14:25:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:23:59.581 14:25:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:23:59.581 14:25:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:23:59.581 14:25:51 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:23:59.581 14:25:51 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:23:59.581 14:25:51 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:23:59.581 14:25:51 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:23:59.581 14:25:51 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:23:59.581 14:25:51 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:23:59.581 14:25:51 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:23:59.581 14:25:51 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:23:59.581 14:25:51 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:59.581 14:25:51 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:23:59.581 14:25:51 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:23:59.581 14:25:51 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:23:59.581 14:25:51 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:23:59.581 14:25:51 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:59.581 14:25:51 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:23:59.581 14:25:51 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:23:59.581 14:25:51 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:23:59.581 14:25:51 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:23:59.581 14:25:51 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:23:59.581 14:25:51 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:23:59.581 14:25:51 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:23:59.581 14:25:51 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:23:59.581 14:25:51 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:23:59.581 14:25:51 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:23:59.581 14:25:51 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:59.581 14:25:51 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:23:59.581 14:25:51 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:23:59.581 14:25:51 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:59.581 14:25:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:59.581 14:25:51 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:59.581 14:25:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:59.581 14:25:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:59.581 14:25:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:59.581 14:25:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:59.581 14:25:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:59.581 14:25:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:59.581 14:25:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:59.581 14:25:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:59.581 14:25:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:59.581 14:25:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:59.581 14:25:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:59.581 14:25:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:59.581 14:25:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:59.581 #define SPDK_CONFIG_H 00:23:59.581 #define SPDK_CONFIG_APPS 1 00:23:59.581 #define SPDK_CONFIG_ARCH native 00:23:59.581 #define SPDK_CONFIG_ASAN 1 00:23:59.581 #undef SPDK_CONFIG_AVAHI 00:23:59.581 #undef SPDK_CONFIG_CET 00:23:59.581 #define SPDK_CONFIG_COVERAGE 1 00:23:59.581 #define SPDK_CONFIG_CROSS_PREFIX 00:23:59.581 #undef SPDK_CONFIG_CRYPTO 00:23:59.581 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:59.581 #undef SPDK_CONFIG_CUSTOMOCF 00:23:59.581 #undef SPDK_CONFIG_DAOS 00:23:59.581 #define SPDK_CONFIG_DAOS_DIR 00:23:59.581 #define SPDK_CONFIG_DEBUG 1 00:23:59.581 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:59.581 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:23:59.581 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:23:59.581 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:23:59.581 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:59.581 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:59.581 #define SPDK_CONFIG_EXAMPLES 1 00:23:59.581 #undef SPDK_CONFIG_FC 00:23:59.581 #define SPDK_CONFIG_FC_PATH 00:23:59.581 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:59.581 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:59.581 #undef SPDK_CONFIG_FUSE 00:23:59.581 #undef SPDK_CONFIG_FUZZER 00:23:59.581 #define SPDK_CONFIG_FUZZER_LIB 00:23:59.581 #undef SPDK_CONFIG_GOLANG 00:23:59.581 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:23:59.581 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:59.581 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:59.581 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:59.581 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:59.581 #define SPDK_CONFIG_IDXD 1 00:23:59.581 #undef SPDK_CONFIG_IDXD_KERNEL 00:23:59.581 #undef SPDK_CONFIG_IPSEC_MB 00:23:59.581 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:59.581 #define SPDK_CONFIG_ISAL 1 00:23:59.581 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:59.581 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:59.581 #define SPDK_CONFIG_LIBDIR 00:23:59.581 #undef SPDK_CONFIG_LTO 00:23:59.581 #define SPDK_CONFIG_MAX_LCORES 00:23:59.581 #define SPDK_CONFIG_NVME_CUSE 1 00:23:59.581 #undef SPDK_CONFIG_OCF 00:23:59.582 #define SPDK_CONFIG_OCF_PATH 00:23:59.582 #define 
SPDK_CONFIG_OPENSSL_PATH 00:23:59.582 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:59.582 #undef SPDK_CONFIG_PGO_USE 00:23:59.582 #define SPDK_CONFIG_PREFIX /usr/local 00:23:59.582 #define SPDK_CONFIG_RAID5F 1 00:23:59.582 #undef SPDK_CONFIG_RBD 00:23:59.582 #define SPDK_CONFIG_RDMA 1 00:23:59.582 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:59.582 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:59.582 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:59.582 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:59.582 #undef SPDK_CONFIG_SHARED 00:23:59.582 #undef SPDK_CONFIG_SMA 00:23:59.582 #define SPDK_CONFIG_TESTS 1 00:23:59.582 #undef SPDK_CONFIG_TSAN 00:23:59.582 #undef SPDK_CONFIG_UBLK 00:23:59.582 #define SPDK_CONFIG_UBSAN 1 00:23:59.582 #define SPDK_CONFIG_UNIT_TESTS 1 00:23:59.582 #undef SPDK_CONFIG_URING 00:23:59.582 #define SPDK_CONFIG_URING_PATH 00:23:59.582 #undef SPDK_CONFIG_URING_ZNS 00:23:59.582 #undef SPDK_CONFIG_USDT 00:23:59.582 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:59.582 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:59.582 #undef SPDK_CONFIG_VFIO_USER 00:23:59.582 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:59.582 #define SPDK_CONFIG_VHOST 1 00:23:59.582 #define SPDK_CONFIG_VIRTIO 1 00:23:59.582 #undef SPDK_CONFIG_VTUNE 00:23:59.582 #define SPDK_CONFIG_VTUNE_DIR 00:23:59.582 #define SPDK_CONFIG_WERROR 1 00:23:59.582 #define SPDK_CONFIG_WPDK_DIR 00:23:59.582 #undef SPDK_CONFIG_XNVME 00:23:59.582 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:59.582 14:25:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:59.582 14:25:51 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.582 14:25:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.582 14:25:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.582 14:25:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.582 14:25:51 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:59.582 14:25:51 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:59.582 14:25:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:59.582 14:25:51 -- paths/export.sh@5 -- # export PATH 00:23:59.582 14:25:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:59.582 14:25:51 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:59.582 14:25:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:59.582 14:25:51 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:59.582 14:25:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:59.582 14:25:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:59.582 14:25:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:59.582 14:25:51 -- pm/common@16 -- # TEST_TAG=N/A 00:23:59.582 14:25:51 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:59.582 14:25:51 -- common/autotest_common.sh@52 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:23:59.582 14:25:51 -- common/autotest_common.sh@56 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:59.582 14:25:51 -- common/autotest_common.sh@58 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:23:59.582 14:25:51 -- common/autotest_common.sh@60 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:59.582 14:25:51 -- common/autotest_common.sh@62 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:23:59.582 14:25:51 -- common/autotest_common.sh@64 -- # : 00:23:59.582 14:25:51 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:23:59.582 14:25:51 -- common/autotest_common.sh@66 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:23:59.582 14:25:51 -- common/autotest_common.sh@68 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:23:59.582 14:25:51 -- common/autotest_common.sh@70 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:23:59.582 14:25:51 -- common/autotest_common.sh@72 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:59.582 14:25:51 -- common/autotest_common.sh@74 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:23:59.582 14:25:51 -- common/autotest_common.sh@76 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:23:59.582 14:25:51 -- common/autotest_common.sh@78 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:23:59.582 14:25:51 -- common/autotest_common.sh@80 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:23:59.582 14:25:51 -- common/autotest_common.sh@82 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:23:59.582 14:25:51 -- common/autotest_common.sh@84 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:23:59.582 14:25:51 -- 
common/autotest_common.sh@86 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:23:59.582 14:25:51 -- common/autotest_common.sh@88 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:23:59.582 14:25:51 -- common/autotest_common.sh@90 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:59.582 14:25:51 -- common/autotest_common.sh@92 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:23:59.582 14:25:51 -- common/autotest_common.sh@94 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:23:59.582 14:25:51 -- common/autotest_common.sh@96 -- # : rdma 00:23:59.582 14:25:51 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:59.582 14:25:51 -- common/autotest_common.sh@98 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:23:59.582 14:25:51 -- common/autotest_common.sh@100 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:23:59.582 14:25:51 -- common/autotest_common.sh@102 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:23:59.582 14:25:51 -- common/autotest_common.sh@104 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:23:59.582 14:25:51 -- common/autotest_common.sh@106 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:23:59.582 14:25:51 -- common/autotest_common.sh@108 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:23:59.582 14:25:51 -- common/autotest_common.sh@110 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:23:59.582 14:25:51 -- common/autotest_common.sh@112 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:59.582 14:25:51 -- common/autotest_common.sh@114 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:23:59.582 14:25:51 -- common/autotest_common.sh@116 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:23:59.582 14:25:51 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:23:59.582 14:25:51 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:59.582 14:25:51 -- common/autotest_common.sh@120 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:23:59.582 14:25:51 -- common/autotest_common.sh@122 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:23:59.582 14:25:51 -- common/autotest_common.sh@124 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:23:59.582 14:25:51 -- common/autotest_common.sh@126 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:23:59.582 14:25:51 -- common/autotest_common.sh@128 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:23:59.582 14:25:51 -- common/autotest_common.sh@130 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:23:59.582 14:25:51 -- common/autotest_common.sh@132 -- # : v22.11.4 00:23:59.582 14:25:51 -- common/autotest_common.sh@133 -- # 
export SPDK_TEST_NATIVE_DPDK 00:23:59.582 14:25:51 -- common/autotest_common.sh@134 -- # : true 00:23:59.582 14:25:51 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:23:59.582 14:25:51 -- common/autotest_common.sh@136 -- # : 1 00:23:59.582 14:25:51 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:23:59.582 14:25:51 -- common/autotest_common.sh@138 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:23:59.582 14:25:51 -- common/autotest_common.sh@140 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:23:59.582 14:25:51 -- common/autotest_common.sh@142 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:23:59.582 14:25:51 -- common/autotest_common.sh@144 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:23:59.582 14:25:51 -- common/autotest_common.sh@146 -- # : 0 00:23:59.582 14:25:51 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:23:59.582 14:25:51 -- common/autotest_common.sh@148 -- # : 00:23:59.582 14:25:51 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:23:59.583 14:25:51 -- common/autotest_common.sh@150 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:23:59.583 14:25:51 -- common/autotest_common.sh@152 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:23:59.583 14:25:51 -- common/autotest_common.sh@154 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:23:59.583 14:25:51 -- common/autotest_common.sh@156 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:23:59.583 14:25:51 -- common/autotest_common.sh@158 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:23:59.583 14:25:51 -- common/autotest_common.sh@160 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:23:59.583 14:25:51 -- common/autotest_common.sh@163 -- # : 00:23:59.583 14:25:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:23:59.583 14:25:51 -- common/autotest_common.sh@165 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:23:59.583 14:25:51 -- common/autotest_common.sh@167 -- # : 0 00:23:59.583 14:25:51 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:59.583 14:25:51 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:59.583 14:25:51 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:59.583 14:25:51 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:59.583 14:25:51 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:59.583 14:25:51 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:59.583 14:25:51 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:59.583 14:25:51 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:23:59.583 14:25:51 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:59.583 14:25:51 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:59.583 14:25:51 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:59.583 14:25:51 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:59.583 14:25:51 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:59.583 14:25:51 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:23:59.583 14:25:51 -- common/autotest_common.sh@196 -- # cat 00:23:59.583 14:25:51 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:23:59.583 14:25:51 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:59.583 14:25:51 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:59.583 14:25:51 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:59.583 14:25:51 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:59.583 14:25:51 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:23:59.583 14:25:51 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:23:59.583 14:25:51 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:59.583 14:25:51 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:59.583 14:25:51 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:59.583 14:25:51 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:59.583 14:25:51 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:23:59.583 14:25:51 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:23:59.583 14:25:51 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:59.583 14:25:51 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:59.583 14:25:51 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:59.583 14:25:51 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:59.583 14:25:51 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:59.583 14:25:51 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:59.583 14:25:51 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:23:59.583 14:25:51 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:23:59.583 14:25:51 -- common/autotest_common.sh@249 -- # _LCOV= 00:23:59.583 14:25:51 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:23:59.583 14:25:51 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:23:59.583 14:25:51 -- common/autotest_common.sh@255 -- # lcov_opt= 00:23:59.583 14:25:51 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:23:59.583 14:25:51 -- common/autotest_common.sh@259 -- # export valgrind= 00:23:59.583 14:25:51 -- common/autotest_common.sh@259 -- # valgrind= 00:23:59.583 14:25:51 -- common/autotest_common.sh@265 -- # uname -s 00:23:59.583 14:25:51 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:23:59.583 14:25:51 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:23:59.583 14:25:51 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:23:59.583 14:25:51 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:23:59.583 14:25:51 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@275 -- # MAKE=make 00:23:59.583 14:25:51 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:23:59.583 14:25:51 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:23:59.583 14:25:51 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:23:59.583 14:25:51 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:59.583 14:25:51 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:23:59.583 14:25:51 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:23:59.583 14:25:51 -- 
common/autotest_common.sh@319 -- # [[ -z 142710 ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@319 -- # kill -0 142710 00:23:59.583 14:25:51 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:23:59.583 14:25:51 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:23:59.583 14:25:51 -- common/autotest_common.sh@332 -- # local mount target_dir 00:23:59.583 14:25:51 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:23:59.583 14:25:51 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:23:59.583 14:25:51 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:23:59.583 14:25:51 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:23:59.583 14:25:51 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.44c3lZ 00:23:59.583 14:25:51 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:59.583 14:25:51 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:23:59.583 14:25:51 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.44c3lZ/tests/interrupt /tmp/spdk.44c3lZ 00:23:59.583 14:25:51 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:23:59.583 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.583 14:25:51 -- common/autotest_common.sh@328 -- # df -T 00:23:59.583 14:25:51 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:23:59.583 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:59.583 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:59.583 14:25:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=1248956416 00:23:59.583 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253683200 00:23:59.583 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=4726784 00:23:59.583 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.583 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:23:59.583 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:23:59.583 14:25:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=9441292288 00:23:59.583 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20616794112 00:23:59.583 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=11158724608 00:23:59.583 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.583 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:59.583 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:59.584 14:25:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267142144 00:23:59.584 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6268399616 00:23:59.584 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:23:59.584 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.584 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:59.584 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:59.584 14:25:51 -- common/autotest_common.sh@363 -- # 
avails["$mount"]=5242880 00:23:59.584 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:23:59.584 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:23:59.584 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.584 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:23:59.584 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:23:59.584 14:25:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=103061504 00:23:59.843 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:23:59.843 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=6334464 00:23:59.843 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.843 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:23:59.843 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:23:59.843 14:25:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253675008 00:23:59.843 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253679104 00:23:59.843 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=4096 00:23:59.843 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.843 14:25:51 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:23:59.843 14:25:51 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:23:59.843 14:25:51 -- common/autotest_common.sh@363 -- # avails["$mount"]=98262614016 00:23:59.843 14:25:51 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:23:59.843 14:25:51 -- common/autotest_common.sh@364 -- # uses["$mount"]=1440165888 00:23:59.843 14:25:51 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:23:59.843 14:25:51 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:23:59.843 * Looking for test storage... 
00:23:59.843 14:25:51 -- common/autotest_common.sh@369 -- # local target_space new_size 00:23:59.843 14:25:51 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:23:59.844 14:25:51 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.844 14:25:51 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:59.844 14:25:51 -- common/autotest_common.sh@373 -- # mount=/ 00:23:59.844 14:25:51 -- common/autotest_common.sh@375 -- # target_space=9441292288 00:23:59.844 14:25:51 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:23:59.844 14:25:51 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:23:59.844 14:25:51 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:23:59.844 14:25:51 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:23:59.844 14:25:51 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:23:59.844 14:25:51 -- common/autotest_common.sh@382 -- # new_size=13373317120 00:23:59.844 14:25:51 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:59.844 14:25:51 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.844 14:25:51 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.844 14:25:51 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:59.844 14:25:51 -- common/autotest_common.sh@390 -- # return 0 00:23:59.844 14:25:51 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:23:59.844 14:25:51 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:23:59.844 14:25:51 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:59.844 14:25:51 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:59.844 14:25:51 -- common/autotest_common.sh@1682 -- # true 00:23:59.844 14:25:51 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:23:59.844 14:25:51 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:59.844 14:25:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:59.844 14:25:51 -- common/autotest_common.sh@27 -- # exec 00:23:59.844 14:25:51 -- common/autotest_common.sh@29 -- # exec 00:23:59.844 14:25:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:59.844 14:25:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:23:59.844 14:25:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:59.844 14:25:51 -- common/autotest_common.sh@18 -- # set -x 00:23:59.844 14:25:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:59.844 14:25:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:59.844 14:25:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:59.844 14:25:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:59.844 14:25:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:59.844 14:25:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:59.844 14:25:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:59.844 14:25:51 -- scripts/common.sh@335 -- # IFS=.-: 00:23:59.844 14:25:51 -- scripts/common.sh@335 -- # read -ra ver1 00:23:59.844 14:25:51 -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.844 14:25:51 -- scripts/common.sh@336 -- # read -ra ver2 00:23:59.844 14:25:51 -- scripts/common.sh@337 -- # local 'op=<' 00:23:59.844 14:25:51 -- scripts/common.sh@339 -- # ver1_l=2 00:23:59.844 14:25:51 -- scripts/common.sh@340 -- # ver2_l=1 00:23:59.844 14:25:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:59.844 14:25:51 -- scripts/common.sh@343 -- # case "$op" in 00:23:59.844 14:25:51 -- scripts/common.sh@344 -- # : 1 00:23:59.844 14:25:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:59.844 14:25:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.844 14:25:51 -- scripts/common.sh@364 -- # decimal 1 00:23:59.844 14:25:51 -- scripts/common.sh@352 -- # local d=1 00:23:59.844 14:25:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.844 14:25:51 -- scripts/common.sh@354 -- # echo 1 00:23:59.844 14:25:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:59.844 14:25:51 -- scripts/common.sh@365 -- # decimal 2 00:23:59.844 14:25:51 -- scripts/common.sh@352 -- # local d=2 00:23:59.844 14:25:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.844 14:25:51 -- scripts/common.sh@354 -- # echo 2 00:23:59.844 14:25:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:59.844 14:25:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:59.844 14:25:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:59.844 14:25:51 -- scripts/common.sh@367 -- # return 0 00:23:59.844 14:25:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.844 14:25:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.844 --rc genhtml_branch_coverage=1 00:23:59.844 --rc genhtml_function_coverage=1 00:23:59.844 --rc genhtml_legend=1 00:23:59.844 --rc geninfo_all_blocks=1 00:23:59.844 --rc geninfo_unexecuted_blocks=1 00:23:59.844 00:23:59.844 ' 00:23:59.844 14:25:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.844 --rc genhtml_branch_coverage=1 00:23:59.844 --rc genhtml_function_coverage=1 00:23:59.844 --rc genhtml_legend=1 00:23:59.844 --rc geninfo_all_blocks=1 00:23:59.844 --rc geninfo_unexecuted_blocks=1 00:23:59.844 00:23:59.844 ' 00:23:59.844 14:25:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.844 --rc genhtml_branch_coverage=1 00:23:59.844 --rc genhtml_function_coverage=1 00:23:59.844 --rc genhtml_legend=1 00:23:59.844 --rc geninfo_all_blocks=1 00:23:59.844 --rc 
geninfo_unexecuted_blocks=1 00:23:59.844 00:23:59.844 ' 00:23:59.844 14:25:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:59.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.844 --rc genhtml_branch_coverage=1 00:23:59.844 --rc genhtml_function_coverage=1 00:23:59.844 --rc genhtml_legend=1 00:23:59.844 --rc geninfo_all_blocks=1 00:23:59.844 --rc geninfo_unexecuted_blocks=1 00:23:59.844 00:23:59.844 ' 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:59.844 14:25:51 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:59.844 14:25:51 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:59.844 14:25:51 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=142782 00:23:59.844 14:25:51 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:59.845 14:25:51 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:59.845 14:25:51 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 142782 /var/tmp/spdk.sock 00:23:59.845 14:25:51 -- common/autotest_common.sh@829 -- # '[' -z 142782 ']' 00:23:59.845 14:25:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.845 14:25:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.845 14:25:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.845 14:25:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.845 14:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.845 [2024-11-18 14:25:51.810595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
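
Between launching interrupt_tgt and issuing the first thread_get_pollers RPC, waitforlisten above blocks until pid 142782 is alive and actually answering on /var/tmp/spdk.sock. A simplified sketch of such a wait loop (the retry count, interval, and the rpc_get_methods probe are assumptions, not the harness's exact logic):

#!/usr/bin/env bash
# Poll until a UNIX-domain RPC socket exists and answers a trivial RPC,
# or give up after ~100 tries (about 10 s). Bails out early if the
# target process dies before it ever starts listening.
wait_for_rpc() {
    local sock=$1 pid=$2 i
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target gone
        if [[ -S $sock ]] && \
           /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# wait_for_rpc /var/tmp/spdk.sock "$intr_tgt_pid" || exit 1

Only once this returns does it make sense to query pollers over the socket, which is why the jq parsing of thread_get_pollers output follows the waitforlisten call in the trace.
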
00:23:59.845 [2024-11-18 14:25:51.811817] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142782 ] 00:24:00.103 [2024-11-18 14:25:51.967152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.103 [2024-11-18 14:25:52.026817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.103 [2024-11-18 14:25:52.026962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.103 [2024-11-18 14:25:52.026956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.103 [2024-11-18 14:25:52.105798] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:01.040 14:25:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.040 14:25:52 -- common/autotest_common.sh@862 -- # return 0 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:01.040 14:25:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.040 14:25:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.040 14:25:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:01.040 "name": "app_thread", 00:24:01.040 "id": 1, 00:24:01.040 "active_pollers": [], 00:24:01.040 "timed_pollers": [ 00:24:01.040 { 00:24:01.040 "name": "rpc_subsystem_poll", 00:24:01.040 "id": 1, 00:24:01.040 "state": "waiting", 00:24:01.040 "run_count": 0, 00:24:01.040 "busy_count": 0, 00:24:01.040 "period_ticks": 8800000 00:24:01.040 } 00:24:01.040 ], 00:24:01.040 "paused_pollers": [] 00:24:01.040 }' 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:24:01.040 14:25:52 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:01.040 14:25:52 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:01.040 14:25:52 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:01.040 14:25:52 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:01.040 5000+0 records in 00:24:01.040 5000+0 records out 00:24:01.040 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0277023 s, 370 MB/s 00:24:01.040 14:25:52 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:01.299 AIO0 00:24:01.299 14:25:53 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:01.558 14:25:53 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:01.558 14:25:53 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:01.558 14:25:53 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:24:01.558 14:25:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.558 14:25:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.558 14:25:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.558 14:25:53 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:01.558 "name": "app_thread", 00:24:01.558 "id": 1, 00:24:01.558 "active_pollers": [], 00:24:01.558 "timed_pollers": [ 00:24:01.558 { 00:24:01.558 "name": "rpc_subsystem_poll", 00:24:01.558 "id": 1, 00:24:01.558 "state": "waiting", 00:24:01.558 "run_count": 0, 00:24:01.558 "busy_count": 0, 00:24:01.558 "period_ticks": 8800000 00:24:01.558 } 00:24:01.558 ], 00:24:01.558 "paused_pollers": [] 00:24:01.558 }' 00:24:01.817 14:25:53 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:01.817 14:25:53 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:01.818 14:25:53 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:01.818 14:25:53 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:01.818 14:25:53 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:24:01.818 14:25:53 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:24:01.818 14:25:53 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:01.818 14:25:53 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 142782 00:24:01.818 14:25:53 -- common/autotest_common.sh@936 -- # '[' -z 142782 ']' 00:24:01.818 14:25:53 -- common/autotest_common.sh@940 -- # kill -0 142782 00:24:01.818 14:25:53 -- common/autotest_common.sh@941 -- # uname 00:24:01.818 14:25:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:01.818 14:25:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 142782 00:24:01.818 14:25:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:01.818 14:25:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:01.818 14:25:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 142782' 00:24:01.818 killing process with pid 142782 00:24:01.818 14:25:53 -- common/autotest_common.sh@955 -- # kill 142782 00:24:01.818 14:25:53 -- common/autotest_common.sh@960 -- # wait 142782 00:24:02.077 14:25:54 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:02.077 14:25:54 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:02.077 ************************************ 00:24:02.077 END TEST reap_unregistered_poller 00:24:02.077 ************************************ 00:24:02.077 00:24:02.077 real 0m2.754s 00:24:02.077 user 0m1.851s 00:24:02.077 sys 0m0.514s 00:24:02.077 14:25:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:02.077 14:25:54 -- common/autotest_common.sh@10 -- # set +x 00:24:02.077 14:25:54 -- spdk/autotest.sh@191 -- # uname -s 00:24:02.077 14:25:54 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:24:02.077 14:25:54 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:24:02.077 14:25:54 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:24:02.077 14:25:54 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:02.077 14:25:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:02.077 14:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:02.077 14:25:54 -- 
common/autotest_common.sh@10 -- # set +x 00:24:02.077 ************************************ 00:24:02.077 START TEST spdk_dd 00:24:02.077 ************************************ 00:24:02.077 14:25:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:02.336 * Looking for test storage... 00:24:02.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:02.336 14:25:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:02.336 14:25:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:02.336 14:25:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:02.336 14:25:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:02.336 14:25:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:02.336 14:25:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:02.336 14:25:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:02.336 14:25:54 -- scripts/common.sh@335 -- # IFS=.-: 00:24:02.336 14:25:54 -- scripts/common.sh@335 -- # read -ra ver1 00:24:02.336 14:25:54 -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.336 14:25:54 -- scripts/common.sh@336 -- # read -ra ver2 00:24:02.336 14:25:54 -- scripts/common.sh@337 -- # local 'op=<' 00:24:02.336 14:25:54 -- scripts/common.sh@339 -- # ver1_l=2 00:24:02.336 14:25:54 -- scripts/common.sh@340 -- # ver2_l=1 00:24:02.336 14:25:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:02.336 14:25:54 -- scripts/common.sh@343 -- # case "$op" in 00:24:02.336 14:25:54 -- scripts/common.sh@344 -- # : 1 00:24:02.336 14:25:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:02.336 14:25:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.336 14:25:54 -- scripts/common.sh@364 -- # decimal 1 00:24:02.336 14:25:54 -- scripts/common.sh@352 -- # local d=1 00:24:02.336 14:25:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.336 14:25:54 -- scripts/common.sh@354 -- # echo 1 00:24:02.336 14:25:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:02.336 14:25:54 -- scripts/common.sh@365 -- # decimal 2 00:24:02.336 14:25:54 -- scripts/common.sh@352 -- # local d=2 00:24:02.336 14:25:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.336 14:25:54 -- scripts/common.sh@354 -- # echo 2 00:24:02.336 14:25:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:02.336 14:25:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:02.336 14:25:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:02.336 14:25:54 -- scripts/common.sh@367 -- # return 0 00:24:02.336 14:25:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.336 14:25:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:02.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.336 --rc genhtml_branch_coverage=1 00:24:02.336 --rc genhtml_function_coverage=1 00:24:02.336 --rc genhtml_legend=1 00:24:02.336 --rc geninfo_all_blocks=1 00:24:02.336 --rc geninfo_unexecuted_blocks=1 00:24:02.336 00:24:02.336 ' 00:24:02.336 14:25:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:02.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.336 --rc genhtml_branch_coverage=1 00:24:02.336 --rc genhtml_function_coverage=1 00:24:02.336 --rc genhtml_legend=1 00:24:02.336 --rc geninfo_all_blocks=1 00:24:02.336 --rc geninfo_unexecuted_blocks=1 00:24:02.336 00:24:02.336 ' 00:24:02.336 14:25:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:02.336 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.336 --rc genhtml_branch_coverage=1 00:24:02.336 --rc genhtml_function_coverage=1 00:24:02.336 --rc genhtml_legend=1 00:24:02.336 --rc geninfo_all_blocks=1 00:24:02.337 --rc geninfo_unexecuted_blocks=1 00:24:02.337 00:24:02.337 ' 00:24:02.337 14:25:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.337 --rc genhtml_branch_coverage=1 00:24:02.337 --rc genhtml_function_coverage=1 00:24:02.337 --rc genhtml_legend=1 00:24:02.337 --rc geninfo_all_blocks=1 00:24:02.337 --rc geninfo_unexecuted_blocks=1 00:24:02.337 00:24:02.337 ' 00:24:02.337 14:25:54 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.337 14:25:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.337 14:25:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.337 14:25:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.337 14:25:54 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.337 14:25:54 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.337 14:25:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.337 14:25:54 -- paths/export.sh@5 -- # export PATH 00:24:02.337 14:25:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.337 14:25:54 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:02.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:24:02.596 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:03.974 14:25:55 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:24:03.974 14:25:55 -- dd/dd.sh@11 -- # nvme_in_userspace 00:24:03.974 14:25:55 -- scripts/common.sh@311 -- # local bdf bdfs 00:24:03.974 14:25:55 -- scripts/common.sh@312 -- # local nvmes 00:24:03.974 14:25:55 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:24:03.974 14:25:55 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:03.974 14:25:55 -- 
scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:24:03.974 14:25:55 -- scripts/common.sh@297 -- # local bdf= 00:24:03.974 14:25:55 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:24:03.974 14:25:55 -- scripts/common.sh@232 -- # local class 00:24:03.974 14:25:55 -- scripts/common.sh@233 -- # local subclass 00:24:03.974 14:25:55 -- scripts/common.sh@234 -- # local progif 00:24:03.974 14:25:55 -- scripts/common.sh@235 -- # printf %02x 1 00:24:03.974 14:25:55 -- scripts/common.sh@235 -- # class=01 00:24:03.974 14:25:55 -- scripts/common.sh@236 -- # printf %02x 8 00:24:03.974 14:25:55 -- scripts/common.sh@236 -- # subclass=08 00:24:03.974 14:25:55 -- scripts/common.sh@237 -- # printf %02x 2 00:24:03.974 14:25:55 -- scripts/common.sh@237 -- # progif=02 00:24:03.974 14:25:55 -- scripts/common.sh@239 -- # hash lspci 00:24:03.974 14:25:55 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:24:03.974 14:25:55 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:24:03.974 14:25:55 -- scripts/common.sh@242 -- # grep -i -- -p02 00:24:03.974 14:25:55 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:03.974 14:25:55 -- scripts/common.sh@244 -- # tr -d '"' 00:24:03.974 14:25:55 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:03.974 14:25:55 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:24:03.974 14:25:55 -- scripts/common.sh@15 -- # local i 00:24:03.974 14:25:55 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:24:03.974 14:25:55 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:03.974 14:25:55 -- scripts/common.sh@24 -- # return 0 00:24:03.974 14:25:55 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:24:03.974 14:25:55 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:24:03.974 14:25:55 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:24:03.974 14:25:55 -- scripts/common.sh@322 -- # uname -s 00:24:03.974 14:25:55 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:24:03.974 14:25:55 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:24:03.974 14:25:55 -- scripts/common.sh@327 -- # (( 1 )) 00:24:03.974 14:25:55 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:24:03.974 14:25:55 -- dd/dd.sh@13 -- # check_liburing 00:24:03.974 14:25:55 -- dd/common.sh@139 -- # local lib so 00:24:03.974 14:25:55 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:24:03.974 14:25:55 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r 
lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:24:03.974 14:25:55 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:03.974 14:25:55 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:24:03.974 14:25:55 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:03.974 14:25:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:03.974 14:25:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:03.974 14:25:55 -- common/autotest_common.sh@10 -- # set +x 00:24:03.974 ************************************ 00:24:03.974 START TEST spdk_dd_basic_rw 00:24:03.974 ************************************ 00:24:03.974 14:25:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:03.974 * Looking for test storage... 
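[Note] Two probes traced just above are worth unpacking. nvme_in_userspace walks PCI devices whose class/subclass/prog-if is 01/08/02 (mass storage / NVM / NVMe) and prints usable BDFs, which is why 0000:00:06.0 comes back; check_liburing then runs the spdk_dd binary under LD_TRACE_LOADED_OBJECTS=1 (the mechanism behind ldd) and scans each shared object the loader reports for liburing.so.*, none of which appears above. A condensed sketch of both, built from the exact commands in the trace:

# NVMe discovery: match class code 0108 with prog-if 02, print the BDF
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

# liburing probe: ld.so lists every shared object it would load, then exits
liburing_in_use=0
while read -r lib _ so _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
echo "liburing_in_use=$liburing_in_use"   # 0 in this run, so the uring tests stay off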
00:24:03.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:03.975 14:25:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:03.975 14:25:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:03.975 14:25:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:03.975 14:25:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:03.975 14:25:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:03.975 14:25:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:03.975 14:25:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:03.975 14:25:55 -- scripts/common.sh@335 -- # IFS=.-: 00:24:03.975 14:25:55 -- scripts/common.sh@335 -- # read -ra ver1 00:24:03.975 14:25:55 -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.975 14:25:55 -- scripts/common.sh@336 -- # read -ra ver2 00:24:03.975 14:25:55 -- scripts/common.sh@337 -- # local 'op=<' 00:24:03.975 14:25:55 -- scripts/common.sh@339 -- # ver1_l=2 00:24:03.975 14:25:55 -- scripts/common.sh@340 -- # ver2_l=1 00:24:03.975 14:25:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:03.975 14:25:55 -- scripts/common.sh@343 -- # case "$op" in 00:24:03.975 14:25:55 -- scripts/common.sh@344 -- # : 1 00:24:03.975 14:25:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:03.975 14:25:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.975 14:25:55 -- scripts/common.sh@364 -- # decimal 1 00:24:03.975 14:25:55 -- scripts/common.sh@352 -- # local d=1 00:24:03.975 14:25:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.975 14:25:55 -- scripts/common.sh@354 -- # echo 1 00:24:03.975 14:25:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:03.975 14:25:55 -- scripts/common.sh@365 -- # decimal 2 00:24:03.975 14:25:55 -- scripts/common.sh@352 -- # local d=2 00:24:03.975 14:25:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.975 14:25:55 -- scripts/common.sh@354 -- # echo 2 00:24:03.975 14:25:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:03.975 14:25:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:03.975 14:25:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:03.975 14:25:55 -- scripts/common.sh@367 -- # return 0 00:24:03.975 14:25:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.975 14:25:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:03.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.975 --rc genhtml_branch_coverage=1 00:24:03.975 --rc genhtml_function_coverage=1 00:24:03.975 --rc genhtml_legend=1 00:24:03.975 --rc geninfo_all_blocks=1 00:24:03.975 --rc geninfo_unexecuted_blocks=1 00:24:03.975 00:24:03.975 ' 00:24:03.975 14:25:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:03.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.975 --rc genhtml_branch_coverage=1 00:24:03.975 --rc genhtml_function_coverage=1 00:24:03.975 --rc genhtml_legend=1 00:24:03.975 --rc geninfo_all_blocks=1 00:24:03.975 --rc geninfo_unexecuted_blocks=1 00:24:03.975 00:24:03.975 ' 00:24:03.975 14:25:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:03.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.975 --rc genhtml_branch_coverage=1 00:24:03.975 --rc genhtml_function_coverage=1 00:24:03.975 --rc genhtml_legend=1 00:24:03.975 --rc geninfo_all_blocks=1 00:24:03.975 --rc geninfo_unexecuted_blocks=1 00:24:03.975 00:24:03.975 ' 00:24:03.975 14:25:55 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:03.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.975 --rc genhtml_branch_coverage=1 00:24:03.975 --rc genhtml_function_coverage=1 00:24:03.975 --rc genhtml_legend=1 00:24:03.975 --rc geninfo_all_blocks=1 00:24:03.975 --rc geninfo_unexecuted_blocks=1 00:24:03.975 00:24:03.975 ' 00:24:03.975 14:25:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.975 14:25:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.975 14:25:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.975 14:25:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.975 14:25:55 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:03.975 14:25:55 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:03.975 14:25:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:03.975 14:25:55 -- paths/export.sh@5 -- # export PATH 00:24:03.975 14:25:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:03.975 14:25:55 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:24:03.975 14:25:55 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:24:03.975 14:25:55 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:24:03.975 14:25:55 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:24:03.975 14:25:55 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:24:03.975 14:25:55 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:03.975 14:25:55 -- dd/basic_rw.sh@85 -- # declare -A 
method_bdev_nvme_attach_controller_0 00:24:03.975 14:25:55 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:03.975 14:25:55 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:03.975 14:25:55 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:24:03.975 14:25:55 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:24:03.975 14:25:55 -- dd/common.sh@126 -- # mapfile -t id 00:24:03.975 14:25:55 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:24:04.237 14:25:56 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware 
Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% 
Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2330 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:24:04.237 14:25:56 -- dd/common.sh@130 -- # lbaf=04 00:24:04.238 14:25:56 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery 
Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer 
Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 111 Data Units Written: 7 Host Read Commands: 2330 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:24:04.238 14:25:56 -- dd/common.sh@132 -- # lbaf=4096 00:24:04.238 14:25:56 -- dd/common.sh@134 -- # echo 4096 00:24:04.238 14:25:56 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:24:04.238 14:25:56 -- dd/basic_rw.sh@96 -- # : 00:24:04.238 14:25:56 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:04.238 14:25:56 -- dd/basic_rw.sh@96 -- # gen_conf 00:24:04.238 14:25:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:24:04.238 14:25:56 -- dd/common.sh@31 -- # xtrace_disable 
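[Note] The two large controller dumps above are get_native_nvme_bs at work: it captures spdk_nvme_identify output once, then applies the two regexes visible at the tail of each [[ ... ]] match, first to find the in-use LBA format index (#04 here), then to read that format's data size (4096). A condensed sketch of the same probe:

id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}        # -> 04
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # -> 4096
echo "$native_bs"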
00:24:04.238 14:25:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.238 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.238 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.238 ************************************ 00:24:04.238 START TEST dd_bs_lt_native_bs 00:24:04.238 ************************************ 00:24:04.238 14:25:56 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:04.238 14:25:56 -- common/autotest_common.sh@650 -- # local es=0 00:24:04.238 14:25:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:04.238 14:25:56 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.238 14:25:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.238 14:25:56 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.238 14:25:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.238 14:25:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.238 14:25:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.238 14:25:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.238 14:25:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:04.238 14:25:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:04.238 { 00:24:04.238 "subsystems": [ 00:24:04.238 { 00:24:04.238 "subsystem": "bdev", 00:24:04.238 "config": [ 00:24:04.238 { 00:24:04.238 "params": { 00:24:04.238 "trtype": "pcie", 00:24:04.238 "traddr": "0000:00:06.0", 00:24:04.238 "name": "Nvme0" 00:24:04.238 }, 00:24:04.238 "method": "bdev_nvme_attach_controller" 00:24:04.238 }, 00:24:04.238 { 00:24:04.238 "method": "bdev_wait_for_examine" 00:24:04.238 } 00:24:04.238 ] 00:24:04.238 } 00:24:04.238 ] 00:24:04.238 } 00:24:04.238 [2024-11-18 14:25:56.245782] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
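[Note] The --json /dev/fd/61 and --if=/dev/fd/62 arguments indicate bash process substitution: gen_conf prints the bdev subsystem JSON shown above and spdk_dd reads it from the substituted descriptor. A condensed stand-in (the real gen_conf in dd/common.sh assembles this from the method_bdev_nvme_attach_controller_0 array declared earlier; this version hard-codes the same JSON):

gen_conf() {
    cat <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0"},
   "method": "bdev_nvme_attach_controller"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
}
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=Nvme0n1 --bs=4096 --json <(gen_conf)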
00:24:04.238 [2024-11-18 14:25:56.246009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143093 ] 00:24:04.498 [2024-11-18 14:25:56.394275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.498 [2024-11-18 14:25:56.487309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.757 [2024-11-18 14:25:56.657182] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:24:04.757 [2024-11-18 14:25:56.657302] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:04.757 [2024-11-18 14:25:56.779860] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:05.016 14:25:56 -- common/autotest_common.sh@653 -- # es=234 00:24:05.016 14:25:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:05.016 14:25:56 -- common/autotest_common.sh@662 -- # es=106 00:24:05.016 14:25:56 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:05.016 14:25:56 -- common/autotest_common.sh@670 -- # es=1 00:24:05.016 14:25:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:05.016 00:24:05.016 real 0m0.712s 00:24:05.016 user 0m0.423s 00:24:05.016 sys 0m0.252s 00:24:05.016 14:25:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:05.017 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.017 ************************************ 00:24:05.017 END TEST dd_bs_lt_native_bs 00:24:05.017 ************************************ 00:24:05.017 14:25:56 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:24:05.017 14:25:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:05.017 14:25:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:05.017 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.017 ************************************ 00:24:05.017 START TEST dd_rw 00:24:05.017 ************************************ 00:24:05.017 14:25:56 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:24:05.017 14:25:56 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:24:05.017 14:25:56 -- dd/basic_rw.sh@12 -- # local count size 00:24:05.017 14:25:56 -- dd/basic_rw.sh@13 -- # local qds bss 00:24:05.017 14:25:56 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:24:05.017 14:25:56 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:05.017 14:25:56 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:05.017 14:25:56 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:05.017 14:25:56 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:05.017 14:25:56 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:05.017 14:25:56 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:05.017 14:25:56 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:05.017 14:25:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:05.017 14:25:56 -- dd/basic_rw.sh@23 -- # count=15 00:24:05.017 14:25:56 -- dd/basic_rw.sh@24 -- # count=15 00:24:05.017 14:25:56 -- dd/basic_rw.sh@25 -- # size=61440 00:24:05.017 14:25:56 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:24:05.017 14:25:56 -- dd/common.sh@98 -- # xtrace_disable 00:24:05.017 14:25:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.584 14:25:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
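[Note] dd_bs_lt_native_bs passes only because spdk_dd refuses the undersized block size (2048 < the 4096 native size probed earlier). The run exits with status 234, which the NOT wrapper in autotest_common.sh normalizes before asserting failure: codes above 128 have 128 subtracted (234 -> 106) and any remaining nonzero collapses to 1. A simplified sketch of that logic; the real helper handles more cases:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-range exit codes
    (( es != 0 )) && es=1                  # collapse any remaining failure to 1
    (( es != 0 ))                          # NOT succeeds only if the command failed
}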
00:24:05.584 14:25:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:05.584 14:25:57 -- dd/common.sh@31 -- # xtrace_disable 00:24:05.584 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.584 [2024-11-18 14:25:57.534820] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:05.584 [2024-11-18 14:25:57.535065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143134 ] 00:24:05.584 { 00:24:05.584 "subsystems": [ 00:24:05.584 { 00:24:05.584 "subsystem": "bdev", 00:24:05.584 "config": [ 00:24:05.584 { 00:24:05.584 "params": { 00:24:05.584 "trtype": "pcie", 00:24:05.584 "traddr": "0000:00:06.0", 00:24:05.584 "name": "Nvme0" 00:24:05.584 }, 00:24:05.584 "method": "bdev_nvme_attach_controller" 00:24:05.584 }, 00:24:05.584 { 00:24:05.584 "method": "bdev_wait_for_examine" 00:24:05.584 } 00:24:05.584 ] 00:24:05.584 } 00:24:05.584 ] 00:24:05.584 } 00:24:05.843 [2024-11-18 14:25:57.682045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.843 [2024-11-18 14:25:57.753026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.843  [2024-11-18T14:25:58.183Z] Copying: 60/60 [kB] (average 14 MBps) 00:24:06.109 00:24:06.396 14:25:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:06.396 14:25:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:24:06.396 14:25:58 -- dd/common.sh@31 -- # xtrace_disable 00:24:06.396 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.396 [2024-11-18 14:25:58.242401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:06.396 [2024-11-18 14:25:58.242633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143158 ] 00:24:06.396 { 00:24:06.396 "subsystems": [ 00:24:06.396 { 00:24:06.396 "subsystem": "bdev", 00:24:06.396 "config": [ 00:24:06.396 { 00:24:06.396 "params": { 00:24:06.396 "trtype": "pcie", 00:24:06.396 "traddr": "0000:00:06.0", 00:24:06.396 "name": "Nvme0" 00:24:06.396 }, 00:24:06.396 "method": "bdev_nvme_attach_controller" 00:24:06.396 }, 00:24:06.396 { 00:24:06.396 "method": "bdev_wait_for_examine" 00:24:06.396 } 00:24:06.396 ] 00:24:06.396 } 00:24:06.396 ] 00:24:06.396 } 00:24:06.396 [2024-11-18 14:25:58.389463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.396 [2024-11-18 14:25:58.460880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.669  [2024-11-18T14:25:59.002Z] Copying: 60/60 [kB] (average 19 MBps) 00:24:06.928 00:24:06.928 14:25:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:06.928 14:25:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:24:06.928 14:25:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:06.928 14:25:58 -- dd/common.sh@11 -- # local nvme_ref= 00:24:06.928 14:25:58 -- dd/common.sh@12 -- # local size=61440 00:24:06.928 14:25:58 -- dd/common.sh@14 -- # local bs=1048576 00:24:06.928 14:25:58 -- dd/common.sh@15 -- # local count=1 00:24:06.928 14:25:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:06.928 14:25:58 -- dd/common.sh@18 -- # gen_conf 00:24:06.928 14:25:58 -- dd/common.sh@31 -- # xtrace_disable 00:24:06.928 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.928 [2024-11-18 14:25:58.951716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:06.928 [2024-11-18 14:25:58.951929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143178 ] 00:24:06.928 { 00:24:06.928 "subsystems": [ 00:24:06.928 { 00:24:06.928 "subsystem": "bdev", 00:24:06.928 "config": [ 00:24:06.928 { 00:24:06.928 "params": { 00:24:06.928 "trtype": "pcie", 00:24:06.928 "traddr": "0000:00:06.0", 00:24:06.928 "name": "Nvme0" 00:24:06.928 }, 00:24:06.928 "method": "bdev_nvme_attach_controller" 00:24:06.928 }, 00:24:06.928 { 00:24:06.928 "method": "bdev_wait_for_examine" 00:24:06.928 } 00:24:06.928 ] 00:24:06.928 } 00:24:06.928 ] 00:24:06.928 } 00:24:07.195 [2024-11-18 14:25:59.090558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.195 [2024-11-18 14:25:59.151302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.457  [2024-11-18T14:25:59.790Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:24:07.716 00:24:07.716 14:25:59 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:07.716 14:25:59 -- dd/basic_rw.sh@23 -- # count=15 00:24:07.716 14:25:59 -- dd/basic_rw.sh@24 -- # count=15 00:24:07.716 14:25:59 -- dd/basic_rw.sh@25 -- # size=61440 00:24:07.716 14:25:59 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:24:07.716 14:25:59 -- dd/common.sh@98 -- # xtrace_disable 00:24:07.716 14:25:59 -- common/autotest_common.sh@10 -- # set +x 00:24:08.283 14:26:00 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:24:08.283 14:26:00 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:08.283 14:26:00 -- dd/common.sh@31 -- # xtrace_disable 00:24:08.283 14:26:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.283 [2024-11-18 14:26:00.157783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
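[Note] The qd=1 pass that just finished, and every dd_rw case after it, is the same four-step round trip; condensed here with shortened paths (conf stands for the bdev JSON shown earlier):

spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json conf             # write 15 blocks
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json conf  # read them back
diff -q dd.dump0 dd.dump1                                                   # byte-for-byte verify
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf      # clear_nvme: re-zero 1 MiB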
00:24:08.283 [2024-11-18 14:26:00.157989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143194 ] 00:24:08.283 { 00:24:08.283 "subsystems": [ 00:24:08.283 { 00:24:08.283 "subsystem": "bdev", 00:24:08.283 "config": [ 00:24:08.283 { 00:24:08.283 "params": { 00:24:08.283 "trtype": "pcie", 00:24:08.283 "traddr": "0000:00:06.0", 00:24:08.283 "name": "Nvme0" 00:24:08.283 }, 00:24:08.283 "method": "bdev_nvme_attach_controller" 00:24:08.283 }, 00:24:08.283 { 00:24:08.283 "method": "bdev_wait_for_examine" 00:24:08.283 } 00:24:08.283 ] 00:24:08.283 } 00:24:08.283 ] 00:24:08.283 } 00:24:08.283 [2024-11-18 14:26:00.295455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.542 [2024-11-18 14:26:00.365427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.542  [2024-11-18T14:26:00.875Z] Copying: 60/60 [kB] (average 58 MBps) 00:24:08.801 00:24:08.801 14:26:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:24:08.801 14:26:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:08.801 14:26:00 -- dd/common.sh@31 -- # xtrace_disable 00:24:08.801 14:26:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.801 [2024-11-18 14:26:00.849247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:08.801 [2024-11-18 14:26:00.849706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143207 ] 00:24:08.801 { 00:24:08.801 "subsystems": [ 00:24:08.801 { 00:24:08.801 "subsystem": "bdev", 00:24:08.801 "config": [ 00:24:08.801 { 00:24:08.801 "params": { 00:24:08.801 "trtype": "pcie", 00:24:08.801 "traddr": "0000:00:06.0", 00:24:08.801 "name": "Nvme0" 00:24:08.801 }, 00:24:08.801 "method": "bdev_nvme_attach_controller" 00:24:08.801 }, 00:24:08.801 { 00:24:08.801 "method": "bdev_wait_for_examine" 00:24:08.801 } 00:24:08.801 ] 00:24:08.801 } 00:24:08.801 ] 00:24:08.801 } 00:24:09.060 [2024-11-18 14:26:01.000365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.060 [2024-11-18 14:26:01.078792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.319  [2024-11-18T14:26:01.652Z] Copying: 60/60 [kB] (average 58 MBps) 00:24:09.578 00:24:09.578 14:26:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:09.578 14:26:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:24:09.578 14:26:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:09.578 14:26:01 -- dd/common.sh@11 -- # local nvme_ref= 00:24:09.578 14:26:01 -- dd/common.sh@12 -- # local size=61440 00:24:09.578 14:26:01 -- dd/common.sh@14 -- # local bs=1048576 00:24:09.578 14:26:01 -- dd/common.sh@15 -- # local count=1 00:24:09.578 14:26:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:09.578 14:26:01 -- dd/common.sh@18 -- # gen_conf 00:24:09.578 14:26:01 -- dd/common.sh@31 -- # xtrace_disable 00:24:09.578 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:24:09.578 { 
00:24:09.578 "subsystems": [ 00:24:09.578 { 00:24:09.578 "subsystem": "bdev", 00:24:09.578 "config": [ 00:24:09.578 { 00:24:09.578 "params": { 00:24:09.578 "trtype": "pcie", 00:24:09.578 "traddr": "0000:00:06.0", 00:24:09.578 "name": "Nvme0" 00:24:09.578 }, 00:24:09.578 "method": "bdev_nvme_attach_controller" 00:24:09.578 }, 00:24:09.578 { 00:24:09.578 "method": "bdev_wait_for_examine" 00:24:09.578 } 00:24:09.578 ] 00:24:09.578 } 00:24:09.578 ] 00:24:09.578 } 00:24:09.578 [2024-11-18 14:26:01.584894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:09.578 [2024-11-18 14:26:01.585270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143227 ] 00:24:09.837 [2024-11-18 14:26:01.731916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.837 [2024-11-18 14:26:01.794416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.095  [2024-11-18T14:26:02.428Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:24:10.354 00:24:10.354 14:26:02 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:10.354 14:26:02 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:10.354 14:26:02 -- dd/basic_rw.sh@23 -- # count=7 00:24:10.354 14:26:02 -- dd/basic_rw.sh@24 -- # count=7 00:24:10.354 14:26:02 -- dd/basic_rw.sh@25 -- # size=57344 00:24:10.354 14:26:02 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:24:10.354 14:26:02 -- dd/common.sh@98 -- # xtrace_disable 00:24:10.354 14:26:02 -- common/autotest_common.sh@10 -- # set +x 00:24:10.923 14:26:02 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:24:10.923 14:26:02 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:10.923 14:26:02 -- dd/common.sh@31 -- # xtrace_disable 00:24:10.923 14:26:02 -- common/autotest_common.sh@10 -- # set +x 00:24:10.923 [2024-11-18 14:26:02.782413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
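[Note] Here the harness advances to the second block size: count drops from 15 to 7 so the payload stays comparable (15 x 4096 = 61440 bytes versus 7 x 8192 = 57344). The matrix being walked was assembled near the top of dd_rw, exactly as traced:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=( $(( native_bs << bs )) )   # 4096 8192 16384
done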
00:24:10.923 [2024-11-18 14:26:02.782763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143247 ] 00:24:10.923 { 00:24:10.923 "subsystems": [ 00:24:10.923 { 00:24:10.923 "subsystem": "bdev", 00:24:10.923 "config": [ 00:24:10.923 { 00:24:10.923 "params": { 00:24:10.923 "trtype": "pcie", 00:24:10.923 "traddr": "0000:00:06.0", 00:24:10.923 "name": "Nvme0" 00:24:10.923 }, 00:24:10.923 "method": "bdev_nvme_attach_controller" 00:24:10.923 }, 00:24:10.923 { 00:24:10.923 "method": "bdev_wait_for_examine" 00:24:10.923 } 00:24:10.923 ] 00:24:10.923 } 00:24:10.923 ] 00:24:10.923 } 00:24:10.923 [2024-11-18 14:26:02.927893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.923 [2024-11-18 14:26:02.990298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.182  [2024-11-18T14:26:03.515Z] Copying: 56/56 [kB] (average 54 MBps) 00:24:11.441 00:24:11.441 14:26:03 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:24:11.441 14:26:03 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:11.441 14:26:03 -- dd/common.sh@31 -- # xtrace_disable 00:24:11.441 14:26:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.441 [2024-11-18 14:26:03.477726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:11.441 [2024-11-18 14:26:03.478375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143263 ] 00:24:11.441 { 00:24:11.441 "subsystems": [ 00:24:11.441 { 00:24:11.441 "subsystem": "bdev", 00:24:11.441 "config": [ 00:24:11.441 { 00:24:11.441 "params": { 00:24:11.441 "trtype": "pcie", 00:24:11.441 "traddr": "0000:00:06.0", 00:24:11.441 "name": "Nvme0" 00:24:11.441 }, 00:24:11.441 "method": "bdev_nvme_attach_controller" 00:24:11.441 }, 00:24:11.441 { 00:24:11.441 "method": "bdev_wait_for_examine" 00:24:11.441 } 00:24:11.441 ] 00:24:11.441 } 00:24:11.441 ] 00:24:11.441 } 00:24:11.700 [2024-11-18 14:26:03.624897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.700 [2024-11-18 14:26:03.685884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.959  [2024-11-18T14:26:04.292Z] Copying: 56/56 [kB] (average 27 MBps) 00:24:12.218 00:24:12.218 14:26:04 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:12.218 14:26:04 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:24:12.218 14:26:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:12.218 14:26:04 -- dd/common.sh@11 -- # local nvme_ref= 00:24:12.218 14:26:04 -- dd/common.sh@12 -- # local size=57344 00:24:12.218 14:26:04 -- dd/common.sh@14 -- # local bs=1048576 00:24:12.218 14:26:04 -- dd/common.sh@15 -- # local count=1 00:24:12.218 14:26:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:12.218 14:26:04 -- dd/common.sh@18 -- # gen_conf 00:24:12.219 14:26:04 -- dd/common.sh@31 -- # xtrace_disable 00:24:12.219 14:26:04 -- common/autotest_common.sh@10 -- # set +x 00:24:12.219 [2024-11-18 
14:26:04.181265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:12.219 [2024-11-18 14:26:04.181933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143284 ] 00:24:12.219 { 00:24:12.219 "subsystems": [ 00:24:12.219 { 00:24:12.219 "subsystem": "bdev", 00:24:12.219 "config": [ 00:24:12.219 { 00:24:12.219 "params": { 00:24:12.219 "trtype": "pcie", 00:24:12.219 "traddr": "0000:00:06.0", 00:24:12.219 "name": "Nvme0" 00:24:12.219 }, 00:24:12.219 "method": "bdev_nvme_attach_controller" 00:24:12.219 }, 00:24:12.219 { 00:24:12.219 "method": "bdev_wait_for_examine" 00:24:12.219 } 00:24:12.219 ] 00:24:12.219 } 00:24:12.219 ] 00:24:12.219 } 00:24:12.478 [2024-11-18 14:26:04.329272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.478 [2024-11-18 14:26:04.400211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.737  [2024-11-18T14:26:05.069Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:24:12.995 00:24:12.995 14:26:04 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:12.995 14:26:04 -- dd/basic_rw.sh@23 -- # count=7 00:24:12.995 14:26:04 -- dd/basic_rw.sh@24 -- # count=7 00:24:12.995 14:26:04 -- dd/basic_rw.sh@25 -- # size=57344 00:24:12.995 14:26:04 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:24:12.995 14:26:04 -- dd/common.sh@98 -- # xtrace_disable 00:24:12.995 14:26:04 -- common/autotest_common.sh@10 -- # set +x 00:24:13.254 14:26:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:24:13.254 14:26:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:13.254 14:26:05 -- dd/common.sh@31 -- # xtrace_disable 00:24:13.254 14:26:05 -- common/autotest_common.sh@10 -- # set +x 00:24:13.513 [2024-11-18 14:26:05.360891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
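[Note] Before each write the input file is repopulated via gen_bytes (its body runs with xtrace disabled, so it never appears in this log). A hypothetical stand-in that matches the observable behavior, namely that dd.dump0 holds count x bs bytes before each case; the real helper in dd/common.sh may generate its data differently:

# Hypothetical stand-in for gen_bytes; implementation not shown in this log.
gen_bytes() {
    head -c "$1" /dev/urandom > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
}
gen_bytes 57344   # 7 blocks x 8192 bytes for the bs=8192 cases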
00:24:13.513 [2024-11-18 14:26:05.361303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143304 ] 00:24:13.513 { 00:24:13.513 "subsystems": [ 00:24:13.513 { 00:24:13.513 "subsystem": "bdev", 00:24:13.513 "config": [ 00:24:13.513 { 00:24:13.513 "params": { 00:24:13.513 "trtype": "pcie", 00:24:13.513 "traddr": "0000:00:06.0", 00:24:13.513 "name": "Nvme0" 00:24:13.513 }, 00:24:13.513 "method": "bdev_nvme_attach_controller" 00:24:13.513 }, 00:24:13.513 { 00:24:13.513 "method": "bdev_wait_for_examine" 00:24:13.513 } 00:24:13.513 ] 00:24:13.513 } 00:24:13.513 ] 00:24:13.513 } 00:24:13.513 [2024-11-18 14:26:05.507873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.513 [2024-11-18 14:26:05.568743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.771  [2024-11-18T14:26:06.105Z] Copying: 56/56 [kB] (average 54 MBps) 00:24:14.031 00:24:14.031 14:26:05 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:24:14.031 14:26:05 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:14.031 14:26:05 -- dd/common.sh@31 -- # xtrace_disable 00:24:14.031 14:26:05 -- common/autotest_common.sh@10 -- # set +x 00:24:14.031 [2024-11-18 14:26:06.049034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:14.031 [2024-11-18 14:26:06.049489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143319 ] 00:24:14.031 { 00:24:14.031 "subsystems": [ 00:24:14.031 { 00:24:14.031 "subsystem": "bdev", 00:24:14.031 "config": [ 00:24:14.031 { 00:24:14.031 "params": { 00:24:14.031 "trtype": "pcie", 00:24:14.031 "traddr": "0000:00:06.0", 00:24:14.031 "name": "Nvme0" 00:24:14.031 }, 00:24:14.031 "method": "bdev_nvme_attach_controller" 00:24:14.031 }, 00:24:14.031 { 00:24:14.031 "method": "bdev_wait_for_examine" 00:24:14.031 } 00:24:14.031 ] 00:24:14.031 } 00:24:14.031 ] 00:24:14.031 } 00:24:14.290 [2024-11-18 14:26:06.196631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.290 [2024-11-18 14:26:06.282997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.550  [2024-11-18T14:26:06.883Z] Copying: 56/56 [kB] (average 54 MBps) 00:24:14.809 00:24:14.809 14:26:06 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:14.809 14:26:06 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:24:14.809 14:26:06 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:14.809 14:26:06 -- dd/common.sh@11 -- # local nvme_ref= 00:24:14.809 14:26:06 -- dd/common.sh@12 -- # local size=57344 00:24:14.809 14:26:06 -- dd/common.sh@14 -- # local bs=1048576 00:24:14.809 14:26:06 -- dd/common.sh@15 -- # local count=1 00:24:14.809 14:26:06 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:14.809 14:26:06 -- dd/common.sh@18 -- # gen_conf 00:24:14.809 14:26:06 -- dd/common.sh@31 -- # xtrace_disable 00:24:14.809 14:26:06 -- common/autotest_common.sh@10 -- # set +x 00:24:14.809 [2024-11-18 
14:26:06.763811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:14.809 [2024-11-18 14:26:06.764190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143335 ] 00:24:14.809 { 00:24:14.809 "subsystems": [ 00:24:14.809 { 00:24:14.809 "subsystem": "bdev", 00:24:14.809 "config": [ 00:24:14.809 { 00:24:14.809 "params": { 00:24:14.809 "trtype": "pcie", 00:24:14.809 "traddr": "0000:00:06.0", 00:24:14.809 "name": "Nvme0" 00:24:14.809 }, 00:24:14.809 "method": "bdev_nvme_attach_controller" 00:24:14.809 }, 00:24:14.809 { 00:24:14.809 "method": "bdev_wait_for_examine" 00:24:14.809 } 00:24:14.809 ] 00:24:14.809 } 00:24:14.809 ] 00:24:14.809 } 00:24:15.068 [2024-11-18 14:26:06.901262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.068 [2024-11-18 14:26:06.965910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.068  [2024-11-18T14:26:07.400Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:15.326 00:24:15.326 14:26:07 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:15.326 14:26:07 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:15.326 14:26:07 -- dd/basic_rw.sh@23 -- # count=3 00:24:15.326 14:26:07 -- dd/basic_rw.sh@24 -- # count=3 00:24:15.326 14:26:07 -- dd/basic_rw.sh@25 -- # size=49152 00:24:15.326 14:26:07 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:24:15.326 14:26:07 -- dd/common.sh@98 -- # xtrace_disable 00:24:15.326 14:26:07 -- common/autotest_common.sh@10 -- # set +x 00:24:15.894 14:26:07 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:24:15.894 14:26:07 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:15.894 14:26:07 -- dd/common.sh@31 -- # xtrace_disable 00:24:15.894 14:26:07 -- common/autotest_common.sh@10 -- # set +x 00:24:15.894 { 00:24:15.894 "subsystems": [ 00:24:15.894 { 00:24:15.894 "subsystem": "bdev", 00:24:15.894 "config": [ 00:24:15.894 { 00:24:15.894 "params": { 00:24:15.894 "trtype": "pcie", 00:24:15.894 "traddr": "0000:00:06.0", 00:24:15.894 "name": "Nvme0" 00:24:15.894 }, 00:24:15.894 "method": "bdev_nvme_attach_controller" 00:24:15.894 }, 00:24:15.894 { 00:24:15.894 "method": "bdev_wait_for_examine" 00:24:15.894 } 00:24:15.894 ] 00:24:15.894 } 00:24:15.894 ] 00:24:15.894 } 00:24:15.894 [2024-11-18 14:26:07.881191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:15.894 [2024-11-18 14:26:07.881899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143355 ] 00:24:16.153 [2024-11-18 14:26:08.026924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.153 [2024-11-18 14:26:08.097170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.412  [2024-11-18T14:26:08.745Z] Copying: 48/48 [kB] (average 46 MBps) 00:24:16.671 00:24:16.671 14:26:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:24:16.671 14:26:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:16.671 14:26:08 -- dd/common.sh@31 -- # xtrace_disable 00:24:16.671 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:24:16.671 { 00:24:16.671 "subsystems": [ 00:24:16.671 { 00:24:16.671 "subsystem": "bdev", 00:24:16.671 "config": [ 00:24:16.671 { 00:24:16.671 "params": { 00:24:16.671 "trtype": "pcie", 00:24:16.671 "traddr": "0000:00:06.0", 00:24:16.671 "name": "Nvme0" 00:24:16.671 }, 00:24:16.671 "method": "bdev_nvme_attach_controller" 00:24:16.671 }, 00:24:16.671 { 00:24:16.671 "method": "bdev_wait_for_examine" 00:24:16.671 } 00:24:16.671 ] 00:24:16.671 } 00:24:16.671 ] 00:24:16.671 } 00:24:16.671 [2024-11-18 14:26:08.575767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:16.672 [2024-11-18 14:26:08.576440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143370 ] 00:24:16.672 [2024-11-18 14:26:08.722992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.931 [2024-11-18 14:26:08.807842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.931  [2024-11-18T14:26:09.264Z] Copying: 48/48 [kB] (average 46 MBps) 00:24:17.190 00:24:17.190 14:26:09 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:17.190 14:26:09 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:24:17.190 14:26:09 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:17.190 14:26:09 -- dd/common.sh@11 -- # local nvme_ref= 00:24:17.190 14:26:09 -- dd/common.sh@12 -- # local size=49152 00:24:17.190 14:26:09 -- dd/common.sh@14 -- # local bs=1048576 00:24:17.190 14:26:09 -- dd/common.sh@15 -- # local count=1 00:24:17.190 14:26:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:17.190 14:26:09 -- dd/common.sh@18 -- # gen_conf 00:24:17.190 14:26:09 -- dd/common.sh@31 -- # xtrace_disable 00:24:17.190 14:26:09 -- common/autotest_common.sh@10 -- # set +x 00:24:17.449 [2024-11-18 14:26:09.289348] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:17.449 [2024-11-18 14:26:09.289796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143391 ] 00:24:17.449 { 00:24:17.449 "subsystems": [ 00:24:17.449 { 00:24:17.449 "subsystem": "bdev", 00:24:17.449 "config": [ 00:24:17.449 { 00:24:17.449 "params": { 00:24:17.449 "trtype": "pcie", 00:24:17.449 "traddr": "0000:00:06.0", 00:24:17.449 "name": "Nvme0" 00:24:17.449 }, 00:24:17.449 "method": "bdev_nvme_attach_controller" 00:24:17.449 }, 00:24:17.449 { 00:24:17.449 "method": "bdev_wait_for_examine" 00:24:17.449 } 00:24:17.449 ] 00:24:17.449 } 00:24:17.449 ] 00:24:17.449 } 00:24:17.449 [2024-11-18 14:26:09.436511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.449 [2024-11-18 14:26:09.498007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.707  [2024-11-18T14:26:10.040Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:24:17.966 00:24:17.966 14:26:09 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:17.966 14:26:09 -- dd/basic_rw.sh@23 -- # count=3 00:24:17.966 14:26:09 -- dd/basic_rw.sh@24 -- # count=3 00:24:17.966 14:26:09 -- dd/basic_rw.sh@25 -- # size=49152 00:24:17.966 14:26:09 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:24:17.966 14:26:09 -- dd/common.sh@98 -- # xtrace_disable 00:24:17.966 14:26:09 -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 14:26:10 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:24:18.533 14:26:10 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:18.533 14:26:10 -- dd/common.sh@31 -- # xtrace_disable 00:24:18.533 14:26:10 -- common/autotest_common.sh@10 -- # set +x 00:24:18.533 [2024-11-18 14:26:10.401229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:18.533 [2024-11-18 14:26:10.401836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143411 ] 00:24:18.533 { 00:24:18.533 "subsystems": [ 00:24:18.533 { 00:24:18.533 "subsystem": "bdev", 00:24:18.533 "config": [ 00:24:18.533 { 00:24:18.533 "params": { 00:24:18.533 "trtype": "pcie", 00:24:18.533 "traddr": "0000:00:06.0", 00:24:18.533 "name": "Nvme0" 00:24:18.533 }, 00:24:18.533 "method": "bdev_nvme_attach_controller" 00:24:18.533 }, 00:24:18.533 { 00:24:18.533 "method": "bdev_wait_for_examine" 00:24:18.533 } 00:24:18.533 ] 00:24:18.533 } 00:24:18.533 ] 00:24:18.533 } 00:24:18.533 [2024-11-18 14:26:10.549364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.792 [2024-11-18 14:26:10.622189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.792  [2024-11-18T14:26:11.125Z] Copying: 48/48 [kB] (average 46 MBps) 00:24:19.051 00:24:19.051 14:26:11 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:19.051 14:26:11 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:24:19.051 14:26:11 -- dd/common.sh@31 -- # xtrace_disable 00:24:19.051 14:26:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.051 { 00:24:19.051 "subsystems": [ 00:24:19.051 { 00:24:19.051 "subsystem": "bdev", 00:24:19.051 "config": [ 00:24:19.051 { 00:24:19.051 "params": { 00:24:19.051 "trtype": "pcie", 00:24:19.051 "traddr": "0000:00:06.0", 00:24:19.051 "name": "Nvme0" 00:24:19.051 }, 00:24:19.051 "method": "bdev_nvme_attach_controller" 00:24:19.051 }, 00:24:19.051 { 00:24:19.051 "method": "bdev_wait_for_examine" 00:24:19.051 } 00:24:19.051 ] 00:24:19.051 } 00:24:19.051 ] 00:24:19.051 } 00:24:19.051 [2024-11-18 14:26:11.101978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:19.051 [2024-11-18 14:26:11.102372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143426 ] 00:24:19.310 [2024-11-18 14:26:11.248918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.310 [2024-11-18 14:26:11.313858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.569  [2024-11-18T14:26:11.902Z] Copying: 48/48 [kB] (average 46 MBps) 00:24:19.828 00:24:19.828 14:26:11 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:19.828 14:26:11 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:24:19.828 14:26:11 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:19.828 14:26:11 -- dd/common.sh@11 -- # local nvme_ref= 00:24:19.828 14:26:11 -- dd/common.sh@12 -- # local size=49152 00:24:19.828 14:26:11 -- dd/common.sh@14 -- # local bs=1048576 00:24:19.828 14:26:11 -- dd/common.sh@15 -- # local count=1 00:24:19.828 14:26:11 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:19.828 14:26:11 -- dd/common.sh@18 -- # gen_conf 00:24:19.828 14:26:11 -- dd/common.sh@31 -- # xtrace_disable 00:24:19.828 14:26:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.828 [2024-11-18 14:26:11.797424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:19.828 [2024-11-18 14:26:11.797873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143440 ] 00:24:19.828 { 00:24:19.828 "subsystems": [ 00:24:19.828 { 00:24:19.828 "subsystem": "bdev", 00:24:19.828 "config": [ 00:24:19.828 { 00:24:19.828 "params": { 00:24:19.828 "trtype": "pcie", 00:24:19.828 "traddr": "0000:00:06.0", 00:24:19.828 "name": "Nvme0" 00:24:19.828 }, 00:24:19.828 "method": "bdev_nvme_attach_controller" 00:24:19.828 }, 00:24:19.828 { 00:24:19.828 "method": "bdev_wait_for_examine" 00:24:19.828 } 00:24:19.828 ] 00:24:19.828 } 00:24:19.828 ] 00:24:19.828 } 00:24:20.087 [2024-11-18 14:26:11.944963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.087 [2024-11-18 14:26:12.018368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.345  [2024-11-18T14:26:12.678Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:24:20.604 00:24:20.604 ************************************ 00:24:20.604 END TEST dd_rw 00:24:20.604 ************************************ 00:24:20.604 00:24:20.604 real 0m15.528s 00:24:20.604 user 0m10.203s 00:24:20.604 sys 0m3.926s 00:24:20.604 14:26:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:20.604 14:26:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.604 14:26:12 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:24:20.604 14:26:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:20.604 14:26:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:20.604 14:26:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.604 ************************************ 00:24:20.604 START TEST dd_rw_offset 00:24:20.604 ************************************ 00:24:20.604 14:26:12 -- common/autotest_common.sh@1114 -- # basic_offset 
00:24:20.604 14:26:12 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:24:20.604 14:26:12 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:24:20.604 14:26:12 -- dd/common.sh@98 -- # xtrace_disable 00:24:20.604 14:26:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.604 14:26:12 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:24:20.605 14:26:12 -- dd/basic_rw.sh@56 -- # data=hhmylwknws2fxzoqaib26hxsukrwii9sh7gx0qd5158vh1rtc3c3q2sayw7ok2zi6g64t7q3obl734s52w1hbafljk4rgwmue3z7fe25izkqtdivat17a4r7v8izsgu5dd8no03alxg130iyhfkzk7cf9agvemgn6m0gvj26txdkjchnhe89wb0ia385ez1c05893abgwxee2ugzjhywj5nasj5bnpg9yp866rcg2tqpn4nvabrg2uxuoe3i2fqq955ri0gxjva2oty7m9ry37n9n6rv8xolpb2ix8g9h9n03uscshdjmt51dxqsaasrzjdex427fkvo5zrh5kf8987fmaxf1pcpcof8fk07cn41hgtt1gqlzaalq8a46qz5n4v6i62c150ea3j992frsy9zr5q4bhfx6krewtjbkwzp7c2y2zbp3gpuwd33j5c7mtz5pmiu19jl9fywy5lbauhv348cp4ftcp64ln7q08wcg7uepkb066t14x96mpcdjxqys3lynw6i7sjsv97gag7kr0tkm63ljhzitjb4crhm4vvko0v1g84v4nc8m0l0a12u0uc7du23akbo7ar00f3ygze2dc7wer2xp9tps7v3mg1qv1smuyko63igbs774id92n7qej4xgb0tgq3sx9jiwufmhsloip2omo6iuin2i2luv56rxkqi2po3f1j96ni3z6p6r84tzjxroibpmyn6xdvct0cuxo2w8w30i5eqiti308tb5ccusvxev7dpqnralp1u9m1yeugx628466yn964c8i1lqzkzvjuoc4zvy85h5hqe6d2ywn8hgeowpsdffzvvdj15rf37l9rlg1vn4sz1sxjbx7o49xknw0ut2byo0iljbsuhcx56o179fhchmb516ogzn6o59m7ndzito8zilg55mye6aqzisqaaepeftkfbcrtr1nts81lgox3puc9vxstpw5n41wyikthochcnslmz3xdho2shaxwzxpuhy32cczugawzmzs1mtiok8t9o8goxu2bk1qc3j8l3fnkufpnj3jn7duzsmiwky87zzn3ixhugfjrjcq3354ld0jocsv8r4ql4953s4w2t8on67jp7fe7kvqmokqhbtsnkn49zkk4m45g0v6jwr0svmxyks65atfdle3z8761egbz9rpsvd53dxzkx8naixdwb71xljcaduwi6y9ags8sfdnjownftw2hanoked6q9xzv228hqq6c51hzu05ojcpubk3csyqyiem6yka8xwrko8o3ezc4c8sfyg3r8gbcy0tbjhyr1f0uy2ej507m2ek4cvd6fnlb3o9qgmjhdbnxogl3438gdly1rtrkl8wqx6ax1i33zdenim92mr9806lbn4l5uxmasfbxuhbxu43t94nx4ima1jupvfxquetwdfgs77thmllj2qis3bkp8jfw540af8eeaztm0hzpbopql8h3njsl8qgwsin5tfag184bz4facs9ybap4mahjhf6om2ezyp51xjayzszsosf7rzbdcm4b2f02vab4fu8w6oqrnqcz6nu5b1j59gqeea5kggh5y7hezgma35p8jftw97kaqprujbdtz6oz23olca4svjnib9d2ox7fl8gs7qptq67zytgaeb955tm4qslfoxrxpv6kv291npiwu4p3lm6ciczez0pkgeeao8bozpy2eucdv1bp4ahpkxxdef4bunnlgqan75rm4y0s12xo6gzcq2rpf4i24bm4o2gx6r9un2uh0eb1tupyy4zu1y70c6pe3joz2nss1d0jqy2an46s003ny27t7j34u3d9cv3spc3vpx2jc7lwakuj4pwu7rjrrimb3h05r0i6twyrpf20tlczkmrf297icvlzbev9p2l9xd0fai41vk9ptkd0zv4mri4bk8mhwom23k3h2na9lwu7d898661p2ti86v7g3iqvxn6c6slnog9bdb9hmoyjkamkw15pgwnaj4t89lcu8cqeoqu1ffajkgckuopsoiwpj4reeq6p082b4de92ie7g0c6ui9mdqnidkm2553sc813qtjd1k6759qdy5fgrn0zrg5kvcd3kfahqo7ut3uq9dfups1nbnq7i1yrgfqmousxbvl0efk80ian8c8lpu2rmptyde8eo21qifbkwyymuvrpg9xbpkuvx0i1ae5eoemgwcsgcc9po6zoe61s2fctm27yszdot34ip19h6idxwm2zsyiy4mmeyw8b23ahvcd4jmuw590n07qttupy96pb7gz6uz52w604n7avdpn9j9j7gtsbofbpjhnogywcliencerzej3w40hq26ztyigmqa6nngnqm0jh4w6ffzfua77oohcu0vi54x029f1oau54ifclffg8rwt2t6enj6rcoucncauavlaakmj9ghh53tqw44y8l4vr85xcpmlfzs7y3q0ak3naxwwd51yib5is3yx4gbfgm12cq1bsjdc16332btnvbpc5515y9yyv9b7dotq195ydfujycqiujznbkr4u58vpbppfqvcwfay05728vau8roh20pbobl38fq6zuwwpafrmak0culhyqmd4mcpmp7na5o7ftietkrso5z92gi4anueo9dfpbna2gc5nqow1bco5vvw77tsjqffsne70vc2rbrztrpe9o40kt0oozgdvhehp6zkkixsfnwu6jeuc36w0richb88yl22fcbb61l5v04m23e1vxjbxso9ertfzspknbmvlahby1p4ami0dmjhxb5qvdtcgolocss1o6a1mouhafm7xa3s5w5km9nw16ju2r1x0tcy0l9oi2qpukf2f8wowi1q9iycyfzde9ysayycs45rlqzcenbdfh32xctskvzuqh68jzxlk12aipbrdj4rrqqt4wooa0p7f4c2jlgaq0ycjbnexx6pc4hdezoqvbl716wnqujjqc2jq6cf28u0a6eratsuqdzlrvelmxe1ofyabshola9r000gm7my6ykydy88jsta41zyt9hqt7xyx16e4uci52po1b2r7czrrshvr1igiz6knn410ijdcmem8nlodgubfjwnaqio5523dtdbyojxy09vyceojay3zxb6u62xovv395
lzxo8cghkkuza0g6do80525bjoa7xjw06t8hahx1tvk5qwnved6xe3yv6gzl18qrtn788spnpcs5oeih4n41ebereh8k70wahbmgwwt0lm609vvcfmb3d2p3ymzcnpni1jsr8fwrr3qijukiih2sov2n3yueo5tq3r3mbis3mkzqida23llhxe2dpppc306qbzgf4gdg1k5pevobstynqk2vmh0kmg4bwwa3o8xs4e3w28oi6f3ohhgvgp3l2iornnsnvu83rwoys0aku6r9ba7y9k611ezhhhbs3d94u11albhkkbzip4cdzkib5vfefql1o77p2lyr2ayojb1tg2nb03kfmxxjryl4s2wj4wq0q8o0wee2lvvsymau6b1fe5jmcdxowkn6tos1smpfzgfvaxyidgv0y0m3gaa3dtuztq44rq99n07su3iwhmk9nniumknwh54m77nrxsmsq564iob9fl4jymq8kyt4k9zuu08taxs4tfveovzkni86k7i26l47g7ara8br9kfsraq0zs6vp4drhsokvkz4rsayzbpe31j8sbjjsud8ar4p1slirbp5yst0tfaex1ljkzn0kdll7pusebjbav4j711thy25h94tbhehdtnqsevzorl2sltze4qeh9v0w28srxwlkjvtm6yupqq54sevigotilxcd62aph0zf6p576m086890aa9iiqdq5hviv1xaq9xg5urrdw5k9232gx18v1cxooxb5p1985xxusliaez304hkgvwem65rmo0kr93dfm123l4vc8ehbfzxdgcfazyn06ztb8qndtggx01qakwvvlxzm431rgbwa9cl6kfuac5zshaa2w8rnavexttx5jaykphk5vniwd7zc9xy0rrxp25j5kgm2p4hwk4rbkcdyqkbed5dkiy5kgg2xvdp80vmfrbj5pb 00:24:20.605 14:26:12 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:24:20.605 14:26:12 -- dd/basic_rw.sh@59 -- # gen_conf 00:24:20.605 14:26:12 -- dd/common.sh@31 -- # xtrace_disable 00:24:20.605 14:26:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.605 [2024-11-18 14:26:12.629392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:20.605 [2024-11-18 14:26:12.629815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143482 ] 00:24:20.605 { 00:24:20.605 "subsystems": [ 00:24:20.605 { 00:24:20.605 "subsystem": "bdev", 00:24:20.605 "config": [ 00:24:20.605 { 00:24:20.605 "params": { 00:24:20.605 "trtype": "pcie", 00:24:20.605 "traddr": "0000:00:06.0", 00:24:20.605 "name": "Nvme0" 00:24:20.605 }, 00:24:20.605 "method": "bdev_nvme_attach_controller" 00:24:20.605 }, 00:24:20.605 { 00:24:20.605 "method": "bdev_wait_for_examine" 00:24:20.605 } 00:24:20.605 ] 00:24:20.605 } 00:24:20.605 ] 00:24:20.605 } 00:24:20.864 [2024-11-18 14:26:12.780891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.864 [2024-11-18 14:26:12.859142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.122  [2024-11-18T14:26:13.455Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:24:21.381 00:24:21.381 14:26:13 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:24:21.381 14:26:13 -- dd/basic_rw.sh@65 -- # gen_conf 00:24:21.381 14:26:13 -- dd/common.sh@31 -- # xtrace_disable 00:24:21.381 14:26:13 -- common/autotest_common.sh@10 -- # set +x 00:24:21.381 [2024-11-18 14:26:13.355131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:21.381 [2024-11-18 14:26:13.355787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143499 ] 00:24:21.381 { 00:24:21.381 "subsystems": [ 00:24:21.381 { 00:24:21.381 "subsystem": "bdev", 00:24:21.381 "config": [ 00:24:21.381 { 00:24:21.381 "params": { 00:24:21.381 "trtype": "pcie", 00:24:21.381 "traddr": "0000:00:06.0", 00:24:21.381 "name": "Nvme0" 00:24:21.381 }, 00:24:21.381 "method": "bdev_nvme_attach_controller" 00:24:21.381 }, 00:24:21.381 { 00:24:21.381 "method": "bdev_wait_for_examine" 00:24:21.381 } 00:24:21.381 ] 00:24:21.381 } 00:24:21.381 ] 00:24:21.381 } 00:24:21.639 [2024-11-18 14:26:13.501910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.639 [2024-11-18 14:26:13.581274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.898  [2024-11-18T14:26:14.232Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:24:22.158 00:24:22.158 14:26:14 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:24:22.159 14:26:14 -- dd/basic_rw.sh@72 -- # [[ hhmylwknws2fxzoqaib26hxsukrwii9sh7gx0qd5158vh1rtc3c3q2sayw7ok2zi6g64t7q3obl734s52w1hbafljk4rgwmue3z7fe25izkqtdivat17a4r7v8izsgu5dd8no03alxg130iyhfkzk7cf9agvemgn6m0gvj26txdkjchnhe89wb0ia385ez1c05893abgwxee2ugzjhywj5nasj5bnpg9yp866rcg2tqpn4nvabrg2uxuoe3i2fqq955ri0gxjva2oty7m9ry37n9n6rv8xolpb2ix8g9h9n03uscshdjmt51dxqsaasrzjdex427fkvo5zrh5kf8987fmaxf1pcpcof8fk07cn41hgtt1gqlzaalq8a46qz5n4v6i62c150ea3j992frsy9zr5q4bhfx6krewtjbkwzp7c2y2zbp3gpuwd33j5c7mtz5pmiu19jl9fywy5lbauhv348cp4ftcp64ln7q08wcg7uepkb066t14x96mpcdjxqys3lynw6i7sjsv97gag7kr0tkm63ljhzitjb4crhm4vvko0v1g84v4nc8m0l0a12u0uc7du23akbo7ar00f3ygze2dc7wer2xp9tps7v3mg1qv1smuyko63igbs774id92n7qej4xgb0tgq3sx9jiwufmhsloip2omo6iuin2i2luv56rxkqi2po3f1j96ni3z6p6r84tzjxroibpmyn6xdvct0cuxo2w8w30i5eqiti308tb5ccusvxev7dpqnralp1u9m1yeugx628466yn964c8i1lqzkzvjuoc4zvy85h5hqe6d2ywn8hgeowpsdffzvvdj15rf37l9rlg1vn4sz1sxjbx7o49xknw0ut2byo0iljbsuhcx56o179fhchmb516ogzn6o59m7ndzito8zilg55mye6aqzisqaaepeftkfbcrtr1nts81lgox3puc9vxstpw5n41wyikthochcnslmz3xdho2shaxwzxpuhy32cczugawzmzs1mtiok8t9o8goxu2bk1qc3j8l3fnkufpnj3jn7duzsmiwky87zzn3ixhugfjrjcq3354ld0jocsv8r4ql4953s4w2t8on67jp7fe7kvqmokqhbtsnkn49zkk4m45g0v6jwr0svmxyks65atfdle3z8761egbz9rpsvd53dxzkx8naixdwb71xljcaduwi6y9ags8sfdnjownftw2hanoked6q9xzv228hqq6c51hzu05ojcpubk3csyqyiem6yka8xwrko8o3ezc4c8sfyg3r8gbcy0tbjhyr1f0uy2ej507m2ek4cvd6fnlb3o9qgmjhdbnxogl3438gdly1rtrkl8wqx6ax1i33zdenim92mr9806lbn4l5uxmasfbxuhbxu43t94nx4ima1jupvfxquetwdfgs77thmllj2qis3bkp8jfw540af8eeaztm0hzpbopql8h3njsl8qgwsin5tfag184bz4facs9ybap4mahjhf6om2ezyp51xjayzszsosf7rzbdcm4b2f02vab4fu8w6oqrnqcz6nu5b1j59gqeea5kggh5y7hezgma35p8jftw97kaqprujbdtz6oz23olca4svjnib9d2ox7fl8gs7qptq67zytgaeb955tm4qslfoxrxpv6kv291npiwu4p3lm6ciczez0pkgeeao8bozpy2eucdv1bp4ahpkxxdef4bunnlgqan75rm4y0s12xo6gzcq2rpf4i24bm4o2gx6r9un2uh0eb1tupyy4zu1y70c6pe3joz2nss1d0jqy2an46s003ny27t7j34u3d9cv3spc3vpx2jc7lwakuj4pwu7rjrrimb3h05r0i6twyrpf20tlczkmrf297icvlzbev9p2l9xd0fai41vk9ptkd0zv4mri4bk8mhwom23k3h2na9lwu7d898661p2ti86v7g3iqvxn6c6slnog9bdb9hmoyjkamkw15pgwnaj4t89lcu8cqeoqu1ffajkgckuopsoiwpj4reeq6p082b4de92ie7g0c6ui9mdqnidkm2553sc813qtjd1k6759qdy5fgrn0zrg5kvcd3kfahqo7ut3uq9dfups1nbnq7i1yrgfqmousxbvl0efk80ian8c8lpu2rmptyde8eo21qifbkwyymuvrpg9xbpkuvx0i1ae5eoemgwcsgcc9po6zoe61s2fctm27yszdot34ip19h6idxwm2zsyiy4mmeyw8b23ahvcd4jmuw590n07qttupy96pb7gz6uz52w604n7avdpn9j9j7gtsbofbpjhnogywcliencerzej3w40hq26ztyigmqa6
nngnqm0jh4w6ffzfua77oohcu0vi54x029f1oau54ifclffg8rwt2t6enj6rcoucncauavlaakmj9ghh53tqw44y8l4vr85xcpmlfzs7y3q0ak3naxwwd51yib5is3yx4gbfgm12cq1bsjdc16332btnvbpc5515y9yyv9b7dotq195ydfujycqiujznbkr4u58vpbppfqvcwfay05728vau8roh20pbobl38fq6zuwwpafrmak0culhyqmd4mcpmp7na5o7ftietkrso5z92gi4anueo9dfpbna2gc5nqow1bco5vvw77tsjqffsne70vc2rbrztrpe9o40kt0oozgdvhehp6zkkixsfnwu6jeuc36w0richb88yl22fcbb61l5v04m23e1vxjbxso9ertfzspknbmvlahby1p4ami0dmjhxb5qvdtcgolocss1o6a1mouhafm7xa3s5w5km9nw16ju2r1x0tcy0l9oi2qpukf2f8wowi1q9iycyfzde9ysayycs45rlqzcenbdfh32xctskvzuqh68jzxlk12aipbrdj4rrqqt4wooa0p7f4c2jlgaq0ycjbnexx6pc4hdezoqvbl716wnqujjqc2jq6cf28u0a6eratsuqdzlrvelmxe1ofyabshola9r000gm7my6ykydy88jsta41zyt9hqt7xyx16e4uci52po1b2r7czrrshvr1igiz6knn410ijdcmem8nlodgubfjwnaqio5523dtdbyojxy09vyceojay3zxb6u62xovv395lzxo8cghkkuza0g6do80525bjoa7xjw06t8hahx1tvk5qwnved6xe3yv6gzl18qrtn788spnpcs5oeih4n41ebereh8k70wahbmgwwt0lm609vvcfmb3d2p3ymzcnpni1jsr8fwrr3qijukiih2sov2n3yueo5tq3r3mbis3mkzqida23llhxe2dpppc306qbzgf4gdg1k5pevobstynqk2vmh0kmg4bwwa3o8xs4e3w28oi6f3ohhgvgp3l2iornnsnvu83rwoys0aku6r9ba7y9k611ezhhhbs3d94u11albhkkbzip4cdzkib5vfefql1o77p2lyr2ayojb1tg2nb03kfmxxjryl4s2wj4wq0q8o0wee2lvvsymau6b1fe5jmcdxowkn6tos1smpfzgfvaxyidgv0y0m3gaa3dtuztq44rq99n07su3iwhmk9nniumknwh54m77nrxsmsq564iob9fl4jymq8kyt4k9zuu08taxs4tfveovzkni86k7i26l47g7ara8br9kfsraq0zs6vp4drhsokvkz4rsayzbpe31j8sbjjsud8ar4p1slirbp5yst0tfaex1ljkzn0kdll7pusebjbav4j711thy25h94tbhehdtnqsevzorl2sltze4qeh9v0w28srxwlkjvtm6yupqq54sevigotilxcd62aph0zf6p576m086890aa9iiqdq5hviv1xaq9xg5urrdw5k9232gx18v1cxooxb5p1985xxusliaez304hkgvwem65rmo0kr93dfm123l4vc8ehbfzxdgcfazyn06ztb8qndtggx01qakwvvlxzm431rgbwa9cl6kfuac5zshaa2w8rnavexttx5jaykphk5vniwd7zc9xy0rrxp25j5kgm2p4hwk4rbkcdyqkbed5dkiy5kgg2xvdp80vmfrbj5pb == \h\h\m\y\l\w\k\n\w\s\2\f\x\z\o\q\a\i\b\2\6\h\x\s\u\k\r\w\i\i\9\s\h\7\g\x\0\q\d\5\1\5\8\v\h\1\r\t\c\3\c\3\q\2\s\a\y\w\7\o\k\2\z\i\6\g\6\4\t\7\q\3\o\b\l\7\3\4\s\5\2\w\1\h\b\a\f\l\j\k\4\r\g\w\m\u\e\3\z\7\f\e\2\5\i\z\k\q\t\d\i\v\a\t\1\7\a\4\r\7\v\8\i\z\s\g\u\5\d\d\8\n\o\0\3\a\l\x\g\1\3\0\i\y\h\f\k\z\k\7\c\f\9\a\g\v\e\m\g\n\6\m\0\g\v\j\2\6\t\x\d\k\j\c\h\n\h\e\8\9\w\b\0\i\a\3\8\5\e\z\1\c\0\5\8\9\3\a\b\g\w\x\e\e\2\u\g\z\j\h\y\w\j\5\n\a\s\j\5\b\n\p\g\9\y\p\8\6\6\r\c\g\2\t\q\p\n\4\n\v\a\b\r\g\2\u\x\u\o\e\3\i\2\f\q\q\9\5\5\r\i\0\g\x\j\v\a\2\o\t\y\7\m\9\r\y\3\7\n\9\n\6\r\v\8\x\o\l\p\b\2\i\x\8\g\9\h\9\n\0\3\u\s\c\s\h\d\j\m\t\5\1\d\x\q\s\a\a\s\r\z\j\d\e\x\4\2\7\f\k\v\o\5\z\r\h\5\k\f\8\9\8\7\f\m\a\x\f\1\p\c\p\c\o\f\8\f\k\0\7\c\n\4\1\h\g\t\t\1\g\q\l\z\a\a\l\q\8\a\4\6\q\z\5\n\4\v\6\i\6\2\c\1\5\0\e\a\3\j\9\9\2\f\r\s\y\9\z\r\5\q\4\b\h\f\x\6\k\r\e\w\t\j\b\k\w\z\p\7\c\2\y\2\z\b\p\3\g\p\u\w\d\3\3\j\5\c\7\m\t\z\5\p\m\i\u\1\9\j\l\9\f\y\w\y\5\l\b\a\u\h\v\3\4\8\c\p\4\f\t\c\p\6\4\l\n\7\q\0\8\w\c\g\7\u\e\p\k\b\0\6\6\t\1\4\x\9\6\m\p\c\d\j\x\q\y\s\3\l\y\n\w\6\i\7\s\j\s\v\9\7\g\a\g\7\k\r\0\t\k\m\6\3\l\j\h\z\i\t\j\b\4\c\r\h\m\4\v\v\k\o\0\v\1\g\8\4\v\4\n\c\8\m\0\l\0\a\1\2\u\0\u\c\7\d\u\2\3\a\k\b\o\7\a\r\0\0\f\3\y\g\z\e\2\d\c\7\w\e\r\2\x\p\9\t\p\s\7\v\3\m\g\1\q\v\1\s\m\u\y\k\o\6\3\i\g\b\s\7\7\4\i\d\9\2\n\7\q\e\j\4\x\g\b\0\t\g\q\3\s\x\9\j\i\w\u\f\m\h\s\l\o\i\p\2\o\m\o\6\i\u\i\n\2\i\2\l\u\v\5\6\r\x\k\q\i\2\p\o\3\f\1\j\9\6\n\i\3\z\6\p\6\r\8\4\t\z\j\x\r\o\i\b\p\m\y\n\6\x\d\v\c\t\0\c\u\x\o\2\w\8\w\3\0\i\5\e\q\i\t\i\3\0\8\t\b\5\c\c\u\s\v\x\e\v\7\d\p\q\n\r\a\l\p\1\u\9\m\1\y\e\u\g\x\6\2\8\4\6\6\y\n\9\6\4\c\8\i\1\l\q\z\k\z\v\j\u\o\c\4\z\v\y\8\5\h\5\h\q\e\6\d\2\y\w\n\8\h\g\e\o\w\p\s\d\f\f\z\v\v\d\j\1\5\r\f\3\7\l\9\r\l\g\1\v\n\4\s\z\1\s\x\j\b\x\7\o\4\9\x\k\n\w\0\u\t\2\b\y\o\0\i\l\j\b\s\u\h\c\x\5\6\o\1\7\9\f\h\c\h\m\b\5\1\6\o\
g\z\n\6\o\5\9\m\7\n\d\z\i\t\o\8\z\i\l\g\5\5\m\y\e\6\a\q\z\i\s\q\a\a\e\p\e\f\t\k\f\b\c\r\t\r\1\n\t\s\8\1\l\g\o\x\3\p\u\c\9\v\x\s\t\p\w\5\n\4\1\w\y\i\k\t\h\o\c\h\c\n\s\l\m\z\3\x\d\h\o\2\s\h\a\x\w\z\x\p\u\h\y\3\2\c\c\z\u\g\a\w\z\m\z\s\1\m\t\i\o\k\8\t\9\o\8\g\o\x\u\2\b\k\1\q\c\3\j\8\l\3\f\n\k\u\f\p\n\j\3\j\n\7\d\u\z\s\m\i\w\k\y\8\7\z\z\n\3\i\x\h\u\g\f\j\r\j\c\q\3\3\5\4\l\d\0\j\o\c\s\v\8\r\4\q\l\4\9\5\3\s\4\w\2\t\8\o\n\6\7\j\p\7\f\e\7\k\v\q\m\o\k\q\h\b\t\s\n\k\n\4\9\z\k\k\4\m\4\5\g\0\v\6\j\w\r\0\s\v\m\x\y\k\s\6\5\a\t\f\d\l\e\3\z\8\7\6\1\e\g\b\z\9\r\p\s\v\d\5\3\d\x\z\k\x\8\n\a\i\x\d\w\b\7\1\x\l\j\c\a\d\u\w\i\6\y\9\a\g\s\8\s\f\d\n\j\o\w\n\f\t\w\2\h\a\n\o\k\e\d\6\q\9\x\z\v\2\2\8\h\q\q\6\c\5\1\h\z\u\0\5\o\j\c\p\u\b\k\3\c\s\y\q\y\i\e\m\6\y\k\a\8\x\w\r\k\o\8\o\3\e\z\c\4\c\8\s\f\y\g\3\r\8\g\b\c\y\0\t\b\j\h\y\r\1\f\0\u\y\2\e\j\5\0\7\m\2\e\k\4\c\v\d\6\f\n\l\b\3\o\9\q\g\m\j\h\d\b\n\x\o\g\l\3\4\3\8\g\d\l\y\1\r\t\r\k\l\8\w\q\x\6\a\x\1\i\3\3\z\d\e\n\i\m\9\2\m\r\9\8\0\6\l\b\n\4\l\5\u\x\m\a\s\f\b\x\u\h\b\x\u\4\3\t\9\4\n\x\4\i\m\a\1\j\u\p\v\f\x\q\u\e\t\w\d\f\g\s\7\7\t\h\m\l\l\j\2\q\i\s\3\b\k\p\8\j\f\w\5\4\0\a\f\8\e\e\a\z\t\m\0\h\z\p\b\o\p\q\l\8\h\3\n\j\s\l\8\q\g\w\s\i\n\5\t\f\a\g\1\8\4\b\z\4\f\a\c\s\9\y\b\a\p\4\m\a\h\j\h\f\6\o\m\2\e\z\y\p\5\1\x\j\a\y\z\s\z\s\o\s\f\7\r\z\b\d\c\m\4\b\2\f\0\2\v\a\b\4\f\u\8\w\6\o\q\r\n\q\c\z\6\n\u\5\b\1\j\5\9\g\q\e\e\a\5\k\g\g\h\5\y\7\h\e\z\g\m\a\3\5\p\8\j\f\t\w\9\7\k\a\q\p\r\u\j\b\d\t\z\6\o\z\2\3\o\l\c\a\4\s\v\j\n\i\b\9\d\2\o\x\7\f\l\8\g\s\7\q\p\t\q\6\7\z\y\t\g\a\e\b\9\5\5\t\m\4\q\s\l\f\o\x\r\x\p\v\6\k\v\2\9\1\n\p\i\w\u\4\p\3\l\m\6\c\i\c\z\e\z\0\p\k\g\e\e\a\o\8\b\o\z\p\y\2\e\u\c\d\v\1\b\p\4\a\h\p\k\x\x\d\e\f\4\b\u\n\n\l\g\q\a\n\7\5\r\m\4\y\0\s\1\2\x\o\6\g\z\c\q\2\r\p\f\4\i\2\4\b\m\4\o\2\g\x\6\r\9\u\n\2\u\h\0\e\b\1\t\u\p\y\y\4\z\u\1\y\7\0\c\6\p\e\3\j\o\z\2\n\s\s\1\d\0\j\q\y\2\a\n\4\6\s\0\0\3\n\y\2\7\t\7\j\3\4\u\3\d\9\c\v\3\s\p\c\3\v\p\x\2\j\c\7\l\w\a\k\u\j\4\p\w\u\7\r\j\r\r\i\m\b\3\h\0\5\r\0\i\6\t\w\y\r\p\f\2\0\t\l\c\z\k\m\r\f\2\9\7\i\c\v\l\z\b\e\v\9\p\2\l\9\x\d\0\f\a\i\4\1\v\k\9\p\t\k\d\0\z\v\4\m\r\i\4\b\k\8\m\h\w\o\m\2\3\k\3\h\2\n\a\9\l\w\u\7\d\8\9\8\6\6\1\p\2\t\i\8\6\v\7\g\3\i\q\v\x\n\6\c\6\s\l\n\o\g\9\b\d\b\9\h\m\o\y\j\k\a\m\k\w\1\5\p\g\w\n\a\j\4\t\8\9\l\c\u\8\c\q\e\o\q\u\1\f\f\a\j\k\g\c\k\u\o\p\s\o\i\w\p\j\4\r\e\e\q\6\p\0\8\2\b\4\d\e\9\2\i\e\7\g\0\c\6\u\i\9\m\d\q\n\i\d\k\m\2\5\5\3\s\c\8\1\3\q\t\j\d\1\k\6\7\5\9\q\d\y\5\f\g\r\n\0\z\r\g\5\k\v\c\d\3\k\f\a\h\q\o\7\u\t\3\u\q\9\d\f\u\p\s\1\n\b\n\q\7\i\1\y\r\g\f\q\m\o\u\s\x\b\v\l\0\e\f\k\8\0\i\a\n\8\c\8\l\p\u\2\r\m\p\t\y\d\e\8\e\o\2\1\q\i\f\b\k\w\y\y\m\u\v\r\p\g\9\x\b\p\k\u\v\x\0\i\1\a\e\5\e\o\e\m\g\w\c\s\g\c\c\9\p\o\6\z\o\e\6\1\s\2\f\c\t\m\2\7\y\s\z\d\o\t\3\4\i\p\1\9\h\6\i\d\x\w\m\2\z\s\y\i\y\4\m\m\e\y\w\8\b\2\3\a\h\v\c\d\4\j\m\u\w\5\9\0\n\0\7\q\t\t\u\p\y\9\6\p\b\7\g\z\6\u\z\5\2\w\6\0\4\n\7\a\v\d\p\n\9\j\9\j\7\g\t\s\b\o\f\b\p\j\h\n\o\g\y\w\c\l\i\e\n\c\e\r\z\e\j\3\w\4\0\h\q\2\6\z\t\y\i\g\m\q\a\6\n\n\g\n\q\m\0\j\h\4\w\6\f\f\z\f\u\a\7\7\o\o\h\c\u\0\v\i\5\4\x\0\2\9\f\1\o\a\u\5\4\i\f\c\l\f\f\g\8\r\w\t\2\t\6\e\n\j\6\r\c\o\u\c\n\c\a\u\a\v\l\a\a\k\m\j\9\g\h\h\5\3\t\q\w\4\4\y\8\l\4\v\r\8\5\x\c\p\m\l\f\z\s\7\y\3\q\0\a\k\3\n\a\x\w\w\d\5\1\y\i\b\5\i\s\3\y\x\4\g\b\f\g\m\1\2\c\q\1\b\s\j\d\c\1\6\3\3\2\b\t\n\v\b\p\c\5\5\1\5\y\9\y\y\v\9\b\7\d\o\t\q\1\9\5\y\d\f\u\j\y\c\q\i\u\j\z\n\b\k\r\4\u\5\8\v\p\b\p\p\f\q\v\c\w\f\a\y\0\5\7\2\8\v\a\u\8\r\o\h\2\0\p\b\o\b\l\3\8\f\q\6\z\u\w\w\p\a\f\r\m\a\k\0\c\u\l\h\y\q\m\d\4\m\c\p\m\p\7\n\a\5\o\7\f\t\i\e\t\k\r\s\o\5\z\9\2\g\i\4\a\n\u\e\o\9\d\f\p\b\n\a\2\g\c\5\n\q\o\w\1\b\c\o\5\v\v\w\7\7\t\s\j\q\f\f\s\n\e\7\0\v\c\2\r\b
\r\z\t\r\p\e\9\o\4\0\k\t\0\o\o\z\g\d\v\h\e\h\p\6\z\k\k\i\x\s\f\n\w\u\6\j\e\u\c\3\6\w\0\r\i\c\h\b\8\8\y\l\2\2\f\c\b\b\6\1\l\5\v\0\4\m\2\3\e\1\v\x\j\b\x\s\o\9\e\r\t\f\z\s\p\k\n\b\m\v\l\a\h\b\y\1\p\4\a\m\i\0\d\m\j\h\x\b\5\q\v\d\t\c\g\o\l\o\c\s\s\1\o\6\a\1\m\o\u\h\a\f\m\7\x\a\3\s\5\w\5\k\m\9\n\w\1\6\j\u\2\r\1\x\0\t\c\y\0\l\9\o\i\2\q\p\u\k\f\2\f\8\w\o\w\i\1\q\9\i\y\c\y\f\z\d\e\9\y\s\a\y\y\c\s\4\5\r\l\q\z\c\e\n\b\d\f\h\3\2\x\c\t\s\k\v\z\u\q\h\6\8\j\z\x\l\k\1\2\a\i\p\b\r\d\j\4\r\r\q\q\t\4\w\o\o\a\0\p\7\f\4\c\2\j\l\g\a\q\0\y\c\j\b\n\e\x\x\6\p\c\4\h\d\e\z\o\q\v\b\l\7\1\6\w\n\q\u\j\j\q\c\2\j\q\6\c\f\2\8\u\0\a\6\e\r\a\t\s\u\q\d\z\l\r\v\e\l\m\x\e\1\o\f\y\a\b\s\h\o\l\a\9\r\0\0\0\g\m\7\m\y\6\y\k\y\d\y\8\8\j\s\t\a\4\1\z\y\t\9\h\q\t\7\x\y\x\1\6\e\4\u\c\i\5\2\p\o\1\b\2\r\7\c\z\r\r\s\h\v\r\1\i\g\i\z\6\k\n\n\4\1\0\i\j\d\c\m\e\m\8\n\l\o\d\g\u\b\f\j\w\n\a\q\i\o\5\5\2\3\d\t\d\b\y\o\j\x\y\0\9\v\y\c\e\o\j\a\y\3\z\x\b\6\u\6\2\x\o\v\v\3\9\5\l\z\x\o\8\c\g\h\k\k\u\z\a\0\g\6\d\o\8\0\5\2\5\b\j\o\a\7\x\j\w\0\6\t\8\h\a\h\x\1\t\v\k\5\q\w\n\v\e\d\6\x\e\3\y\v\6\g\z\l\1\8\q\r\t\n\7\8\8\s\p\n\p\c\s\5\o\e\i\h\4\n\4\1\e\b\e\r\e\h\8\k\7\0\w\a\h\b\m\g\w\w\t\0\l\m\6\0\9\v\v\c\f\m\b\3\d\2\p\3\y\m\z\c\n\p\n\i\1\j\s\r\8\f\w\r\r\3\q\i\j\u\k\i\i\h\2\s\o\v\2\n\3\y\u\e\o\5\t\q\3\r\3\m\b\i\s\3\m\k\z\q\i\d\a\2\3\l\l\h\x\e\2\d\p\p\p\c\3\0\6\q\b\z\g\f\4\g\d\g\1\k\5\p\e\v\o\b\s\t\y\n\q\k\2\v\m\h\0\k\m\g\4\b\w\w\a\3\o\8\x\s\4\e\3\w\2\8\o\i\6\f\3\o\h\h\g\v\g\p\3\l\2\i\o\r\n\n\s\n\v\u\8\3\r\w\o\y\s\0\a\k\u\6\r\9\b\a\7\y\9\k\6\1\1\e\z\h\h\h\b\s\3\d\9\4\u\1\1\a\l\b\h\k\k\b\z\i\p\4\c\d\z\k\i\b\5\v\f\e\f\q\l\1\o\7\7\p\2\l\y\r\2\a\y\o\j\b\1\t\g\2\n\b\0\3\k\f\m\x\x\j\r\y\l\4\s\2\w\j\4\w\q\0\q\8\o\0\w\e\e\2\l\v\v\s\y\m\a\u\6\b\1\f\e\5\j\m\c\d\x\o\w\k\n\6\t\o\s\1\s\m\p\f\z\g\f\v\a\x\y\i\d\g\v\0\y\0\m\3\g\a\a\3\d\t\u\z\t\q\4\4\r\q\9\9\n\0\7\s\u\3\i\w\h\m\k\9\n\n\i\u\m\k\n\w\h\5\4\m\7\7\n\r\x\s\m\s\q\5\6\4\i\o\b\9\f\l\4\j\y\m\q\8\k\y\t\4\k\9\z\u\u\0\8\t\a\x\s\4\t\f\v\e\o\v\z\k\n\i\8\6\k\7\i\2\6\l\4\7\g\7\a\r\a\8\b\r\9\k\f\s\r\a\q\0\z\s\6\v\p\4\d\r\h\s\o\k\v\k\z\4\r\s\a\y\z\b\p\e\3\1\j\8\s\b\j\j\s\u\d\8\a\r\4\p\1\s\l\i\r\b\p\5\y\s\t\0\t\f\a\e\x\1\l\j\k\z\n\0\k\d\l\l\7\p\u\s\e\b\j\b\a\v\4\j\7\1\1\t\h\y\2\5\h\9\4\t\b\h\e\h\d\t\n\q\s\e\v\z\o\r\l\2\s\l\t\z\e\4\q\e\h\9\v\0\w\2\8\s\r\x\w\l\k\j\v\t\m\6\y\u\p\q\q\5\4\s\e\v\i\g\o\t\i\l\x\c\d\6\2\a\p\h\0\z\f\6\p\5\7\6\m\0\8\6\8\9\0\a\a\9\i\i\q\d\q\5\h\v\i\v\1\x\a\q\9\x\g\5\u\r\r\d\w\5\k\9\2\3\2\g\x\1\8\v\1\c\x\o\o\x\b\5\p\1\9\8\5\x\x\u\s\l\i\a\e\z\3\0\4\h\k\g\v\w\e\m\6\5\r\m\o\0\k\r\9\3\d\f\m\1\2\3\l\4\v\c\8\e\h\b\f\z\x\d\g\c\f\a\z\y\n\0\6\z\t\b\8\q\n\d\t\g\g\x\0\1\q\a\k\w\v\v\l\x\z\m\4\3\1\r\g\b\w\a\9\c\l\6\k\f\u\a\c\5\z\s\h\a\a\2\w\8\r\n\a\v\e\x\t\t\x\5\j\a\y\k\p\h\k\5\v\n\i\w\d\7\z\c\9\x\y\0\r\r\x\p\2\5\j\5\k\g\m\2\p\4\h\w\k\4\r\b\k\c\d\y\q\k\b\e\d\5\d\k\i\y\5\k\g\g\2\x\v\d\p\8\0\v\m\f\r\b\j\5\p\b ]] 00:24:22.159 00:24:22.159 real 0m1.505s 00:24:22.159 user 0m0.876s 00:24:22.159 sys 0m0.480s 00:24:22.159 14:26:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:22.159 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:24:22.159 ************************************ 00:24:22.159 END TEST dd_rw_offset 00:24:22.159 ************************************ 00:24:22.159 14:26:14 -- dd/basic_rw.sh@1 -- # cleanup 00:24:22.159 14:26:14 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:24:22.159 14:26:14 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:22.159 14:26:14 -- dd/common.sh@11 -- # local nvme_ref= 00:24:22.159 14:26:14 -- dd/common.sh@12 -- # local size=0xffff 00:24:22.159 14:26:14 -- dd/common.sh@14 -- 
# local bs=1048576 00:24:22.159 14:26:14 -- dd/common.sh@15 -- # local count=1 00:24:22.159 14:26:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:22.159 14:26:14 -- dd/common.sh@18 -- # gen_conf 00:24:22.159 14:26:14 -- dd/common.sh@31 -- # xtrace_disable 00:24:22.159 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:24:22.159 [2024-11-18 14:26:14.125685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:22.159 [2024-11-18 14:26:14.126077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143527 ] 00:24:22.159 { 00:24:22.159 "subsystems": [ 00:24:22.159 { 00:24:22.159 "subsystem": "bdev", 00:24:22.159 "config": [ 00:24:22.159 { 00:24:22.159 "params": { 00:24:22.159 "trtype": "pcie", 00:24:22.159 "traddr": "0000:00:06.0", 00:24:22.159 "name": "Nvme0" 00:24:22.159 }, 00:24:22.159 "method": "bdev_nvme_attach_controller" 00:24:22.159 }, 00:24:22.159 { 00:24:22.159 "method": "bdev_wait_for_examine" 00:24:22.159 } 00:24:22.159 ] 00:24:22.159 } 00:24:22.159 ] 00:24:22.159 } 00:24:22.418 [2024-11-18 14:26:14.276435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.418 [2024-11-18 14:26:14.351236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.677  [2024-11-18T14:26:15.010Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:22.936 00:24:22.936 14:26:14 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:22.936 ************************************ 00:24:22.936 END TEST spdk_dd_basic_rw 00:24:22.936 ************************************ 00:24:22.936 00:24:22.936 real 0m19.045s 00:24:22.936 user 0m12.257s 00:24:22.936 sys 0m5.044s 00:24:22.936 14:26:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:22.936 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:24:22.936 14:26:14 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:24:22.936 14:26:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:22.936 14:26:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:22.936 14:26:14 -- common/autotest_common.sh@10 -- # set +x 00:24:22.936 ************************************ 00:24:22.936 START TEST spdk_dd_posix 00:24:22.936 ************************************ 00:24:22.937 14:26:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:24:22.937 * Looking for test storage... 
00:24:22.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:22.937 14:26:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:22.937 14:26:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:22.937 14:26:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:22.937 14:26:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:22.937 14:26:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:22.937 14:26:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:22.937 14:26:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:22.937 14:26:14 -- scripts/common.sh@335 -- # IFS=.-: 00:24:22.937 14:26:14 -- scripts/common.sh@335 -- # read -ra ver1 00:24:22.937 14:26:14 -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.937 14:26:14 -- scripts/common.sh@336 -- # read -ra ver2 00:24:22.937 14:26:14 -- scripts/common.sh@337 -- # local 'op=<' 00:24:22.937 14:26:14 -- scripts/common.sh@339 -- # ver1_l=2 00:24:22.937 14:26:14 -- scripts/common.sh@340 -- # ver2_l=1 00:24:22.937 14:26:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:22.937 14:26:14 -- scripts/common.sh@343 -- # case "$op" in 00:24:22.937 14:26:14 -- scripts/common.sh@344 -- # : 1 00:24:22.937 14:26:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:22.937 14:26:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:22.937 14:26:14 -- scripts/common.sh@364 -- # decimal 1 00:24:22.937 14:26:14 -- scripts/common.sh@352 -- # local d=1 00:24:22.937 14:26:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.937 14:26:14 -- scripts/common.sh@354 -- # echo 1 00:24:22.937 14:26:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:22.937 14:26:14 -- scripts/common.sh@365 -- # decimal 2 00:24:22.937 14:26:15 -- scripts/common.sh@352 -- # local d=2 00:24:22.937 14:26:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.937 14:26:15 -- scripts/common.sh@354 -- # echo 2 00:24:23.196 14:26:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:23.196 14:26:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:23.196 14:26:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:23.197 14:26:15 -- scripts/common.sh@367 -- # return 0 00:24:23.197 14:26:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.197 14:26:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.197 --rc genhtml_branch_coverage=1 00:24:23.197 --rc genhtml_function_coverage=1 00:24:23.197 --rc genhtml_legend=1 00:24:23.197 --rc geninfo_all_blocks=1 00:24:23.197 --rc geninfo_unexecuted_blocks=1 00:24:23.197 00:24:23.197 ' 00:24:23.197 14:26:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.197 --rc genhtml_branch_coverage=1 00:24:23.197 --rc genhtml_function_coverage=1 00:24:23.197 --rc genhtml_legend=1 00:24:23.197 --rc geninfo_all_blocks=1 00:24:23.197 --rc geninfo_unexecuted_blocks=1 00:24:23.197 00:24:23.197 ' 00:24:23.197 14:26:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.197 --rc genhtml_branch_coverage=1 00:24:23.197 --rc genhtml_function_coverage=1 00:24:23.197 --rc genhtml_legend=1 00:24:23.197 --rc geninfo_all_blocks=1 00:24:23.197 --rc geninfo_unexecuted_blocks=1 00:24:23.197 00:24:23.197 ' 00:24:23.197 14:26:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:23.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.197 --rc genhtml_branch_coverage=1 00:24:23.197 --rc genhtml_function_coverage=1 00:24:23.197 --rc genhtml_legend=1 00:24:23.197 --rc geninfo_all_blocks=1 00:24:23.197 --rc geninfo_unexecuted_blocks=1 00:24:23.197 00:24:23.197 ' 00:24:23.197 14:26:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.197 14:26:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.197 14:26:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.197 14:26:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.197 14:26:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:23.197 14:26:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:23.197 14:26:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:23.197 14:26:15 -- paths/export.sh@5 -- # export PATH 00:24:23.197 14:26:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:23.197 14:26:15 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:24:23.197 14:26:15 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:24:23.197 14:26:15 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:24:23.197 14:26:15 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:24:23.197 14:26:15 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:23.197 14:26:15 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:23.197 14:26:15 -- 
dd/posix.sh@130 -- # tests 00:24:23.197 14:26:15 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:24:23.197 * First test run, using AIO 00:24:23.197 14:26:15 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:24:23.197 14:26:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:23.197 14:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:23.197 14:26:15 -- common/autotest_common.sh@10 -- # set +x 00:24:23.197 ************************************ 00:24:23.197 START TEST dd_flag_append 00:24:23.197 ************************************ 00:24:23.197 14:26:15 -- common/autotest_common.sh@1114 -- # append 00:24:23.197 14:26:15 -- dd/posix.sh@16 -- # local dump0 00:24:23.197 14:26:15 -- dd/posix.sh@17 -- # local dump1 00:24:23.197 14:26:15 -- dd/posix.sh@19 -- # gen_bytes 32 00:24:23.197 14:26:15 -- dd/common.sh@98 -- # xtrace_disable 00:24:23.197 14:26:15 -- common/autotest_common.sh@10 -- # set +x 00:24:23.197 14:26:15 -- dd/posix.sh@19 -- # dump0=wwv3z9insj9dh56joh4o22zpxltpi2l0 00:24:23.197 14:26:15 -- dd/posix.sh@20 -- # gen_bytes 32 00:24:23.197 14:26:15 -- dd/common.sh@98 -- # xtrace_disable 00:24:23.197 14:26:15 -- common/autotest_common.sh@10 -- # set +x 00:24:23.197 14:26:15 -- dd/posix.sh@20 -- # dump1=gmgmrl6nf7acc0ilvbq8i7gxp1bbmacy 00:24:23.197 14:26:15 -- dd/posix.sh@22 -- # printf %s wwv3z9insj9dh56joh4o22zpxltpi2l0 00:24:23.197 14:26:15 -- dd/posix.sh@23 -- # printf %s gmgmrl6nf7acc0ilvbq8i7gxp1bbmacy 00:24:23.197 14:26:15 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:24:23.197 [2024-11-18 14:26:15.069716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:23.197 [2024-11-18 14:26:15.069916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143607 ] 00:24:23.197 [2024-11-18 14:26:15.208164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.197 [2024-11-18 14:26:15.263801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.457  [2024-11-18T14:26:15.789Z] Copying: 32/32 [B] (average 31 kBps) 00:24:23.715 00:24:23.715 14:26:15 -- dd/posix.sh@27 -- # [[ gmgmrl6nf7acc0ilvbq8i7gxp1bbmacywwv3z9insj9dh56joh4o22zpxltpi2l0 == \g\m\g\m\r\l\6\n\f\7\a\c\c\0\i\l\v\b\q\8\i\7\g\x\p\1\b\b\m\a\c\y\w\w\v\3\z\9\i\n\s\j\9\d\h\5\6\j\o\h\4\o\2\2\z\p\x\l\t\p\i\2\l\0 ]] 00:24:23.715 00:24:23.715 real 0m0.577s 00:24:23.715 user 0m0.266s 00:24:23.715 sys 0m0.174s 00:24:23.715 14:26:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:23.715 ************************************ 00:24:23.715 END TEST dd_flag_append 00:24:23.715 ************************************ 00:24:23.715 14:26:15 -- common/autotest_common.sh@10 -- # set +x 00:24:23.715 14:26:15 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:24:23.715 14:26:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:23.715 14:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:23.715 14:26:15 -- common/autotest_common.sh@10 -- # set +x 00:24:23.715 ************************************ 00:24:23.715 START TEST dd_flag_directory 00:24:23.715 ************************************ 00:24:23.715 14:26:15 -- common/autotest_common.sh@1114 -- # directory 00:24:23.715 14:26:15 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:23.715 14:26:15 -- common/autotest_common.sh@650 -- # local es=0 00:24:23.715 14:26:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:23.715 14:26:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.715 14:26:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.715 14:26:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.716 14:26:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.716 14:26:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.716 14:26:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:23.716 14:26:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.716 14:26:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:23.716 14:26:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:23.716 [2024-11-18 14:26:15.716545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:23.716 [2024-11-18 14:26:15.716802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143642 ] 00:24:23.975 [2024-11-18 14:26:15.864075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.975 [2024-11-18 14:26:15.934473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.975 [2024-11-18 14:26:16.020214] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:23.975 [2024-11-18 14:26:16.020531] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:23.975 [2024-11-18 14:26:16.020623] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:24.232 [2024-11-18 14:26:16.138108] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:24.232 14:26:16 -- common/autotest_common.sh@653 -- # es=236 00:24:24.232 14:26:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:24.232 14:26:16 -- common/autotest_common.sh@662 -- # es=108 00:24:24.232 14:26:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:24.232 14:26:16 -- common/autotest_common.sh@670 -- # es=1 00:24:24.232 14:26:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:24.232 14:26:16 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:24:24.233 14:26:16 -- common/autotest_common.sh@650 -- # local es=0 00:24:24.233 14:26:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:24:24.233 14:26:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:24.233 14:26:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.233 14:26:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:24.233 14:26:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.233 14:26:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:24.233 14:26:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:24.233 14:26:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:24.233 14:26:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:24.233 14:26:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:24:24.233 [2024-11-18 14:26:16.303564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:24.233 [2024-11-18 14:26:16.303787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143662 ] 00:24:24.491 [2024-11-18 14:26:16.449376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.491 [2024-11-18 14:26:16.520513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.750 [2024-11-18 14:26:16.607492] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:24.750 [2024-11-18 14:26:16.607950] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:24.750 [2024-11-18 14:26:16.608181] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:24.750 [2024-11-18 14:26:16.722793] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:25.009 14:26:16 -- common/autotest_common.sh@653 -- # es=236 00:24:25.009 14:26:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.009 14:26:16 -- common/autotest_common.sh@662 -- # es=108 00:24:25.009 14:26:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:25.009 14:26:16 -- common/autotest_common.sh@670 -- # es=1 00:24:25.009 14:26:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.009 00:24:25.009 real 0m1.170s 00:24:25.009 user 0m0.562s 00:24:25.009 sys 0m0.405s 00:24:25.009 14:26:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:25.009 14:26:16 -- common/autotest_common.sh@10 -- # set +x 00:24:25.009 ************************************ 00:24:25.009 END TEST dd_flag_directory 00:24:25.009 ************************************ 00:24:25.009 14:26:16 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:24:25.009 14:26:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:25.009 14:26:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.009 14:26:16 -- common/autotest_common.sh@10 -- # set +x 00:24:25.009 ************************************ 00:24:25.009 START TEST dd_flag_nofollow 00:24:25.009 ************************************ 00:24:25.009 14:26:16 -- common/autotest_common.sh@1114 -- # nofollow 00:24:25.009 14:26:16 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:24:25.009 14:26:16 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:24:25.009 14:26:16 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:24:25.009 14:26:16 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:24:25.009 14:26:16 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:25.009 14:26:16 -- common/autotest_common.sh@650 -- # local es=0 00:24:25.009 14:26:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:25.009 14:26:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.009 14:26:16 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.009 14:26:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.009 14:26:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.009 14:26:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.009 14:26:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.009 14:26:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.009 14:26:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:25.010 14:26:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:25.010 [2024-11-18 14:26:16.940246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:25.010 [2024-11-18 14:26:16.940478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143691 ] 00:24:25.269 [2024-11-18 14:26:17.088747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.269 [2024-11-18 14:26:17.165831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.269 [2024-11-18 14:26:17.253007] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:24:25.269 [2024-11-18 14:26:17.253360] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:24:25.269 [2024-11-18 14:26:17.253435] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:25.528 [2024-11-18 14:26:17.371917] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:25.528 14:26:17 -- common/autotest_common.sh@653 -- # es=216 00:24:25.528 14:26:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.528 14:26:17 -- common/autotest_common.sh@662 -- # es=88 00:24:25.528 14:26:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:25.528 14:26:17 -- common/autotest_common.sh@670 -- # es=1 00:24:25.528 14:26:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.528 14:26:17 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:25.528 14:26:17 -- common/autotest_common.sh@650 -- # local es=0 00:24:25.528 14:26:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:25.528 14:26:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.528 14:26:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.528 14:26:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.528 14:26:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.528 14:26:17 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.528 14:26:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:25.528 14:26:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:25.528 14:26:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:25.528 14:26:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:25.528 [2024-11-18 14:26:17.530534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:25.528 [2024-11-18 14:26:17.530827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143711 ] 00:24:25.787 [2024-11-18 14:26:17.676350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.787 [2024-11-18 14:26:17.740627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.787 [2024-11-18 14:26:17.822849] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:24:25.787 [2024-11-18 14:26:17.823313] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:24:25.787 [2024-11-18 14:26:17.823464] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:26.046 [2024-11-18 14:26:17.937081] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:26.046 14:26:18 -- common/autotest_common.sh@653 -- # es=216 00:24:26.046 14:26:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:26.046 14:26:18 -- common/autotest_common.sh@662 -- # es=88 00:24:26.046 14:26:18 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:26.046 14:26:18 -- common/autotest_common.sh@670 -- # es=1 00:24:26.046 14:26:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:26.046 14:26:18 -- dd/posix.sh@46 -- # gen_bytes 512 00:24:26.046 14:26:18 -- dd/common.sh@98 -- # xtrace_disable 00:24:26.046 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:26.046 14:26:18 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:26.046 [2024-11-18 14:26:18.085883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
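Both failing runs above opened a symlink with nofollow semantics (first on the input side, then on the output side), which the kernel rejects with ELOOP, reported as "Too many levels of symbolic links"; the run starting above drops the flag, so the link is dereferenced and the 512-byte copy goes through. GNU dd exposes the same flag, which gives a quick standalone reproduction (a sketch; file names follow the log):

    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link iflag=nofollow of=dd.dump1   # open() with O_NOFOLLOW on a symlink fails with ELOOP
    dd if=dd.dump0.link of=dd.dump1                  # without nofollow the link is followed; copy succeeds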
00:24:26.046 [2024-11-18 14:26:18.086112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143715 ] 00:24:26.305 [2024-11-18 14:26:18.222203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.305 [2024-11-18 14:26:18.282713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.305  [2024-11-18T14:26:18.637Z] Copying: 512/512 [B] (average 500 kBps) 00:24:26.563 00:24:26.822 14:26:18 -- dd/posix.sh@49 -- # [[ ih8i6ckzsz5c6m4gvtlm2g8kz4wluicdjb9d5i34yvyw8zmby45i3s4hudfiuz01smfrn6fs38av2xzgjkobyivk5qjm2ur6hbuzd9k2o3t2xdyi6iqyr780fsnts77f2le1q8q0qdv0lta9j2bvfjv3axzd5hi9cm0nm3smdwb976xzkjo7yj75x16r0wmy58b4bmwkkos2vhireitfsutk42h7mjsk8mhybl560b2sdlk6uhh59pzv1zrpphiuy94ek1cp0irhp7cu3tqz9oqc66gwb5k872ue16xyw7v578c29v4jek78v1j24rcj00uffea776j96phv1ih3pb0wc0gobgkq87t4mznzqpk4cql9lprgyu7trrmec8cwmnouysi0762w4tini6rusabsj3re041hkt9oglfvad0xtwfmb48xn503z1irtg8bnfkmtrnxw9lk2gpo32c5rktv66f6o6dn6f6jnzpgg5h11sff57lfdbz0efbiy50q == \i\h\8\i\6\c\k\z\s\z\5\c\6\m\4\g\v\t\l\m\2\g\8\k\z\4\w\l\u\i\c\d\j\b\9\d\5\i\3\4\y\v\y\w\8\z\m\b\y\4\5\i\3\s\4\h\u\d\f\i\u\z\0\1\s\m\f\r\n\6\f\s\3\8\a\v\2\x\z\g\j\k\o\b\y\i\v\k\5\q\j\m\2\u\r\6\h\b\u\z\d\9\k\2\o\3\t\2\x\d\y\i\6\i\q\y\r\7\8\0\f\s\n\t\s\7\7\f\2\l\e\1\q\8\q\0\q\d\v\0\l\t\a\9\j\2\b\v\f\j\v\3\a\x\z\d\5\h\i\9\c\m\0\n\m\3\s\m\d\w\b\9\7\6\x\z\k\j\o\7\y\j\7\5\x\1\6\r\0\w\m\y\5\8\b\4\b\m\w\k\k\o\s\2\v\h\i\r\e\i\t\f\s\u\t\k\4\2\h\7\m\j\s\k\8\m\h\y\b\l\5\6\0\b\2\s\d\l\k\6\u\h\h\5\9\p\z\v\1\z\r\p\p\h\i\u\y\9\4\e\k\1\c\p\0\i\r\h\p\7\c\u\3\t\q\z\9\o\q\c\6\6\g\w\b\5\k\8\7\2\u\e\1\6\x\y\w\7\v\5\7\8\c\2\9\v\4\j\e\k\7\8\v\1\j\2\4\r\c\j\0\0\u\f\f\e\a\7\7\6\j\9\6\p\h\v\1\i\h\3\p\b\0\w\c\0\g\o\b\g\k\q\8\7\t\4\m\z\n\z\q\p\k\4\c\q\l\9\l\p\r\g\y\u\7\t\r\r\m\e\c\8\c\w\m\n\o\u\y\s\i\0\7\6\2\w\4\t\i\n\i\6\r\u\s\a\b\s\j\3\r\e\0\4\1\h\k\t\9\o\g\l\f\v\a\d\0\x\t\w\f\m\b\4\8\x\n\5\0\3\z\1\i\r\t\g\8\b\n\f\k\m\t\r\n\x\w\9\l\k\2\g\p\o\3\2\c\5\r\k\t\v\6\6\f\6\o\6\d\n\6\f\6\j\n\z\p\g\g\5\h\1\1\s\f\f\5\7\l\f\d\b\z\0\e\f\b\i\y\5\0\q ]] 00:24:26.822 00:24:26.822 real 0m1.760s 00:24:26.822 user 0m0.882s 00:24:26.822 sys 0m0.541s 00:24:26.822 14:26:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:26.822 ************************************ 00:24:26.822 END TEST dd_flag_nofollow 00:24:26.822 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:26.822 ************************************ 00:24:26.822 14:26:18 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:24:26.822 14:26:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:26.822 14:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:26.822 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:26.822 ************************************ 00:24:26.822 START TEST dd_flag_noatime 00:24:26.822 ************************************ 00:24:26.822 14:26:18 -- common/autotest_common.sh@1114 -- # noatime 00:24:26.822 14:26:18 -- dd/posix.sh@53 -- # local atime_if 00:24:26.822 14:26:18 -- dd/posix.sh@54 -- # local atime_of 00:24:26.822 14:26:18 -- dd/posix.sh@58 -- # gen_bytes 512 00:24:26.822 14:26:18 -- dd/common.sh@98 -- # xtrace_disable 00:24:26.822 14:26:18 -- common/autotest_common.sh@10 -- # set +x 00:24:26.822 14:26:18 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:26.822 14:26:18 -- dd/posix.sh@60 -- # atime_if=1731939978 
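The lines that follow record the second file's atime the same way, sleep one second so a later read would be observable, copy with --iflag=noatime, and then require both recorded epochs to be unchanged; the closing (( atime_if < ... )) check confirms that a plain copy afterwards does advance atime. The flow, condensed (a sketch using the same stat --printf=%X epoch trick as the log):

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1                                                # a normal read after this would bump atime
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1    # an O_NOATIME read must leave atime alone
    (( $(stat --printf=%X dd.dump0) == atime_before ))     # fails the test if atime moved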
00:24:26.822 14:26:18 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:26.822 14:26:18 -- dd/posix.sh@61 -- # atime_of=1731939978 00:24:26.822 14:26:18 -- dd/posix.sh@66 -- # sleep 1 00:24:27.758 14:26:19 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:27.758 [2024-11-18 14:26:19.783882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:27.758 [2024-11-18 14:26:19.784181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143767 ] 00:24:28.017 [2024-11-18 14:26:19.935533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.017 [2024-11-18 14:26:19.999684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.275  [2024-11-18T14:26:20.607Z] Copying: 512/512 [B] (average 500 kBps) 00:24:28.533 00:24:28.533 14:26:20 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:28.533 14:26:20 -- dd/posix.sh@69 -- # (( atime_if == 1731939978 )) 00:24:28.533 14:26:20 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:28.533 14:26:20 -- dd/posix.sh@70 -- # (( atime_of == 1731939978 )) 00:24:28.533 14:26:20 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:28.533 [2024-11-18 14:26:20.445908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:28.533 [2024-11-18 14:26:20.446132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143786 ] 00:24:28.533 [2024-11-18 14:26:20.592466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.792 [2024-11-18 14:26:20.662834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.792  [2024-11-18T14:26:21.126Z] Copying: 512/512 [B] (average 500 kBps) 00:24:29.052 00:24:29.052 14:26:21 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:29.052 14:26:21 -- dd/posix.sh@73 -- # (( atime_if < 1731939980 )) 00:24:29.052 00:24:29.052 real 0m2.348s 00:24:29.052 user 0m0.634s 00:24:29.052 sys 0m0.414s 00:24:29.052 14:26:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:29.052 14:26:21 -- common/autotest_common.sh@10 -- # set +x 00:24:29.052 ************************************ 00:24:29.052 END TEST dd_flag_noatime 00:24:29.052 ************************************ 00:24:29.052 14:26:21 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:24:29.052 14:26:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:29.052 14:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:29.052 14:26:21 -- common/autotest_common.sh@10 -- # set +x 00:24:29.052 ************************************ 00:24:29.052 START TEST dd_flags_misc 00:24:29.052 ************************************ 00:24:29.052 14:26:21 -- common/autotest_common.sh@1114 -- # io 00:24:29.052 14:26:21 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:24:29.052 14:26:21 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:24:29.052 14:26:21 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:24:29.052 14:26:21 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:24:29.052 14:26:21 -- dd/posix.sh@86 -- # gen_bytes 512 00:24:29.052 14:26:21 -- dd/common.sh@98 -- # xtrace_disable 00:24:29.052 14:26:21 -- common/autotest_common.sh@10 -- # set +x 00:24:29.052 14:26:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:29.052 14:26:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:24:29.337 [2024-11-18 14:26:21.157608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
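The posix.sh xtrace above declares two flag arrays and walks their cross product, so the eight runs that follow pair every read flag (direct, nonblock) with every write flag (direct, nonblock, sync, dsync). The driving loop reduced to its essentials (a sketch mirroring the traced declarations):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)       # the write side additionally takes sync/dsync
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do      # 2 x 4 = 8 combinations
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        done
    done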
00:24:29.337 [2024-11-18 14:26:21.157866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143816 ] 00:24:29.337 [2024-11-18 14:26:21.302015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.337 [2024-11-18 14:26:21.365021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.655  [2024-11-18T14:26:21.993Z] Copying: 512/512 [B] (average 500 kBps) 00:24:29.919 00:24:29.919 14:26:21 -- dd/posix.sh@93 -- # [[ 56enb7tcpm7kyrm3m8el3296h5ybnkrgop3kwcsozusve7k0xwsj4ejqrmhht2vrhy6jgnpla555vfq12zc2x3gcqzb8a62yu3uzvvdhnip0pht4ahp6b9rowb2q37slatmiaiviy1vaednca08chd2h77daq27zny735zl0fnlqigj9trprt4ffx6qlv6l6thx068w34c6vssvpcau14o0bjudzcrz5hlyvsv6k2to17k4vnkzswxuncza558um0suj5k44fu95xf3jgobh61fcz2ggt1w4kxyxxc1j0407920ei4p79ssp3dw5p4nz1cd3v0fncwig149wvh7ph5l88qy1ej7svttc90fpwavf7dx5vk9pni8t2azjm27dzw935c7k1101s1teq1xrhyzgac22pgvcyyzhxg6n11c1rk9f251m7q0gwj2rqvmftx1a8iq3gqv71h9c0fcltncm7gktj26ko3njmkd71ujrqs7egatbkheayre4vhy3 == \5\6\e\n\b\7\t\c\p\m\7\k\y\r\m\3\m\8\e\l\3\2\9\6\h\5\y\b\n\k\r\g\o\p\3\k\w\c\s\o\z\u\s\v\e\7\k\0\x\w\s\j\4\e\j\q\r\m\h\h\t\2\v\r\h\y\6\j\g\n\p\l\a\5\5\5\v\f\q\1\2\z\c\2\x\3\g\c\q\z\b\8\a\6\2\y\u\3\u\z\v\v\d\h\n\i\p\0\p\h\t\4\a\h\p\6\b\9\r\o\w\b\2\q\3\7\s\l\a\t\m\i\a\i\v\i\y\1\v\a\e\d\n\c\a\0\8\c\h\d\2\h\7\7\d\a\q\2\7\z\n\y\7\3\5\z\l\0\f\n\l\q\i\g\j\9\t\r\p\r\t\4\f\f\x\6\q\l\v\6\l\6\t\h\x\0\6\8\w\3\4\c\6\v\s\s\v\p\c\a\u\1\4\o\0\b\j\u\d\z\c\r\z\5\h\l\y\v\s\v\6\k\2\t\o\1\7\k\4\v\n\k\z\s\w\x\u\n\c\z\a\5\5\8\u\m\0\s\u\j\5\k\4\4\f\u\9\5\x\f\3\j\g\o\b\h\6\1\f\c\z\2\g\g\t\1\w\4\k\x\y\x\x\c\1\j\0\4\0\7\9\2\0\e\i\4\p\7\9\s\s\p\3\d\w\5\p\4\n\z\1\c\d\3\v\0\f\n\c\w\i\g\1\4\9\w\v\h\7\p\h\5\l\8\8\q\y\1\e\j\7\s\v\t\t\c\9\0\f\p\w\a\v\f\7\d\x\5\v\k\9\p\n\i\8\t\2\a\z\j\m\2\7\d\z\w\9\3\5\c\7\k\1\1\0\1\s\1\t\e\q\1\x\r\h\y\z\g\a\c\2\2\p\g\v\c\y\y\z\h\x\g\6\n\1\1\c\1\r\k\9\f\2\5\1\m\7\q\0\g\w\j\2\r\q\v\m\f\t\x\1\a\8\i\q\3\g\q\v\7\1\h\9\c\0\f\c\l\t\n\c\m\7\g\k\t\j\2\6\k\o\3\n\j\m\k\d\7\1\u\j\r\q\s\7\e\g\a\t\b\k\h\e\a\y\r\e\4\v\h\y\3 ]] 00:24:29.919 14:26:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:29.919 14:26:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:24:29.919 [2024-11-18 14:26:21.766186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
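The long comparison traced above is ordinary bash rather than corruption: the test reads the copied file back and checks it against the expected 512 random bytes with [[ lhs == rhs ]], and because the right-hand side of == is a glob pattern, xtrace prints it with every character backslash-escaped to show it is matched literally. A two-line illustration:

    val=abc
    [[ $val == \a\b\c ]] && echo "literal match"   # escaped RHS compares byte-for-byte
    [[ $val == a* ]] && echo "glob match"          # unescaped RHS acts as a pattern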
00:24:29.919 [2024-11-18 14:26:21.766430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143831 ] 00:24:29.919 [2024-11-18 14:26:21.904479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.919 [2024-11-18 14:26:21.967683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.178  [2024-11-18T14:26:22.511Z] Copying: 512/512 [B] (average 500 kBps) 00:24:30.437 00:24:30.437 14:26:22 -- dd/posix.sh@93 -- # [[ 56enb7tcpm7kyrm3m8el3296h5ybnkrgop3kwcsozusve7k0xwsj4ejqrmhht2vrhy6jgnpla555vfq12zc2x3gcqzb8a62yu3uzvvdhnip0pht4ahp6b9rowb2q37slatmiaiviy1vaednca08chd2h77daq27zny735zl0fnlqigj9trprt4ffx6qlv6l6thx068w34c6vssvpcau14o0bjudzcrz5hlyvsv6k2to17k4vnkzswxuncza558um0suj5k44fu95xf3jgobh61fcz2ggt1w4kxyxxc1j0407920ei4p79ssp3dw5p4nz1cd3v0fncwig149wvh7ph5l88qy1ej7svttc90fpwavf7dx5vk9pni8t2azjm27dzw935c7k1101s1teq1xrhyzgac22pgvcyyzhxg6n11c1rk9f251m7q0gwj2rqvmftx1a8iq3gqv71h9c0fcltncm7gktj26ko3njmkd71ujrqs7egatbkheayre4vhy3 == \5\6\e\n\b\7\t\c\p\m\7\k\y\r\m\3\m\8\e\l\3\2\9\6\h\5\y\b\n\k\r\g\o\p\3\k\w\c\s\o\z\u\s\v\e\7\k\0\x\w\s\j\4\e\j\q\r\m\h\h\t\2\v\r\h\y\6\j\g\n\p\l\a\5\5\5\v\f\q\1\2\z\c\2\x\3\g\c\q\z\b\8\a\6\2\y\u\3\u\z\v\v\d\h\n\i\p\0\p\h\t\4\a\h\p\6\b\9\r\o\w\b\2\q\3\7\s\l\a\t\m\i\a\i\v\i\y\1\v\a\e\d\n\c\a\0\8\c\h\d\2\h\7\7\d\a\q\2\7\z\n\y\7\3\5\z\l\0\f\n\l\q\i\g\j\9\t\r\p\r\t\4\f\f\x\6\q\l\v\6\l\6\t\h\x\0\6\8\w\3\4\c\6\v\s\s\v\p\c\a\u\1\4\o\0\b\j\u\d\z\c\r\z\5\h\l\y\v\s\v\6\k\2\t\o\1\7\k\4\v\n\k\z\s\w\x\u\n\c\z\a\5\5\8\u\m\0\s\u\j\5\k\4\4\f\u\9\5\x\f\3\j\g\o\b\h\6\1\f\c\z\2\g\g\t\1\w\4\k\x\y\x\x\c\1\j\0\4\0\7\9\2\0\e\i\4\p\7\9\s\s\p\3\d\w\5\p\4\n\z\1\c\d\3\v\0\f\n\c\w\i\g\1\4\9\w\v\h\7\p\h\5\l\8\8\q\y\1\e\j\7\s\v\t\t\c\9\0\f\p\w\a\v\f\7\d\x\5\v\k\9\p\n\i\8\t\2\a\z\j\m\2\7\d\z\w\9\3\5\c\7\k\1\1\0\1\s\1\t\e\q\1\x\r\h\y\z\g\a\c\2\2\p\g\v\c\y\y\z\h\x\g\6\n\1\1\c\1\r\k\9\f\2\5\1\m\7\q\0\g\w\j\2\r\q\v\m\f\t\x\1\a\8\i\q\3\g\q\v\7\1\h\9\c\0\f\c\l\t\n\c\m\7\g\k\t\j\2\6\k\o\3\n\j\m\k\d\7\1\u\j\r\q\s\7\e\g\a\t\b\k\h\e\a\y\r\e\4\v\h\y\3 ]] 00:24:30.437 14:26:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:30.437 14:26:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:24:30.437 [2024-11-18 14:26:22.378722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:30.437 [2024-11-18 14:26:22.379470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143847 ] 00:24:30.697 [2024-11-18 14:26:22.524795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.697 [2024-11-18 14:26:22.593497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.697  [2024-11-18T14:26:23.030Z] Copying: 512/512 [B] (average 166 kBps) 00:24:30.956 00:24:30.956 14:26:22 -- dd/posix.sh@93 -- # [[ 56enb7tcpm7kyrm3m8el3296h5ybnkrgop3kwcsozusve7k0xwsj4ejqrmhht2vrhy6jgnpla555vfq12zc2x3gcqzb8a62yu3uzvvdhnip0pht4ahp6b9rowb2q37slatmiaiviy1vaednca08chd2h77daq27zny735zl0fnlqigj9trprt4ffx6qlv6l6thx068w34c6vssvpcau14o0bjudzcrz5hlyvsv6k2to17k4vnkzswxuncza558um0suj5k44fu95xf3jgobh61fcz2ggt1w4kxyxxc1j0407920ei4p79ssp3dw5p4nz1cd3v0fncwig149wvh7ph5l88qy1ej7svttc90fpwavf7dx5vk9pni8t2azjm27dzw935c7k1101s1teq1xrhyzgac22pgvcyyzhxg6n11c1rk9f251m7q0gwj2rqvmftx1a8iq3gqv71h9c0fcltncm7gktj26ko3njmkd71ujrqs7egatbkheayre4vhy3 == \5\6\e\n\b\7\t\c\p\m\7\k\y\r\m\3\m\8\e\l\3\2\9\6\h\5\y\b\n\k\r\g\o\p\3\k\w\c\s\o\z\u\s\v\e\7\k\0\x\w\s\j\4\e\j\q\r\m\h\h\t\2\v\r\h\y\6\j\g\n\p\l\a\5\5\5\v\f\q\1\2\z\c\2\x\3\g\c\q\z\b\8\a\6\2\y\u\3\u\z\v\v\d\h\n\i\p\0\p\h\t\4\a\h\p\6\b\9\r\o\w\b\2\q\3\7\s\l\a\t\m\i\a\i\v\i\y\1\v\a\e\d\n\c\a\0\8\c\h\d\2\h\7\7\d\a\q\2\7\z\n\y\7\3\5\z\l\0\f\n\l\q\i\g\j\9\t\r\p\r\t\4\f\f\x\6\q\l\v\6\l\6\t\h\x\0\6\8\w\3\4\c\6\v\s\s\v\p\c\a\u\1\4\o\0\b\j\u\d\z\c\r\z\5\h\l\y\v\s\v\6\k\2\t\o\1\7\k\4\v\n\k\z\s\w\x\u\n\c\z\a\5\5\8\u\m\0\s\u\j\5\k\4\4\f\u\9\5\x\f\3\j\g\o\b\h\6\1\f\c\z\2\g\g\t\1\w\4\k\x\y\x\x\c\1\j\0\4\0\7\9\2\0\e\i\4\p\7\9\s\s\p\3\d\w\5\p\4\n\z\1\c\d\3\v\0\f\n\c\w\i\g\1\4\9\w\v\h\7\p\h\5\l\8\8\q\y\1\e\j\7\s\v\t\t\c\9\0\f\p\w\a\v\f\7\d\x\5\v\k\9\p\n\i\8\t\2\a\z\j\m\2\7\d\z\w\9\3\5\c\7\k\1\1\0\1\s\1\t\e\q\1\x\r\h\y\z\g\a\c\2\2\p\g\v\c\y\y\z\h\x\g\6\n\1\1\c\1\r\k\9\f\2\5\1\m\7\q\0\g\w\j\2\r\q\v\m\f\t\x\1\a\8\i\q\3\g\q\v\7\1\h\9\c\0\f\c\l\t\n\c\m\7\g\k\t\j\2\6\k\o\3\n\j\m\k\d\7\1\u\j\r\q\s\7\e\g\a\t\b\k\h\e\a\y\r\e\4\v\h\y\3 ]] 00:24:30.956 14:26:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:30.956 14:26:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:24:30.956 [2024-11-18 14:26:22.995123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:30.956 [2024-11-18 14:26:22.995366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143853 ] 00:24:31.215 [2024-11-18 14:26:23.140909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.215 [2024-11-18 14:26:23.209636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.474  [2024-11-18T14:26:23.808Z] Copying: 512/512 [B] (average 250 kBps) 00:24:31.734 00:24:31.734 14:26:23 -- dd/posix.sh@93 -- # [[ 56enb7tcpm7kyrm3m8el3296h5ybnkrgop3kwcsozusve7k0xwsj4ejqrmhht2vrhy6jgnpla555vfq12zc2x3gcqzb8a62yu3uzvvdhnip0pht4ahp6b9rowb2q37slatmiaiviy1vaednca08chd2h77daq27zny735zl0fnlqigj9trprt4ffx6qlv6l6thx068w34c6vssvpcau14o0bjudzcrz5hlyvsv6k2to17k4vnkzswxuncza558um0suj5k44fu95xf3jgobh61fcz2ggt1w4kxyxxc1j0407920ei4p79ssp3dw5p4nz1cd3v0fncwig149wvh7ph5l88qy1ej7svttc90fpwavf7dx5vk9pni8t2azjm27dzw935c7k1101s1teq1xrhyzgac22pgvcyyzhxg6n11c1rk9f251m7q0gwj2rqvmftx1a8iq3gqv71h9c0fcltncm7gktj26ko3njmkd71ujrqs7egatbkheayre4vhy3 == \5\6\e\n\b\7\t\c\p\m\7\k\y\r\m\3\m\8\e\l\3\2\9\6\h\5\y\b\n\k\r\g\o\p\3\k\w\c\s\o\z\u\s\v\e\7\k\0\x\w\s\j\4\e\j\q\r\m\h\h\t\2\v\r\h\y\6\j\g\n\p\l\a\5\5\5\v\f\q\1\2\z\c\2\x\3\g\c\q\z\b\8\a\6\2\y\u\3\u\z\v\v\d\h\n\i\p\0\p\h\t\4\a\h\p\6\b\9\r\o\w\b\2\q\3\7\s\l\a\t\m\i\a\i\v\i\y\1\v\a\e\d\n\c\a\0\8\c\h\d\2\h\7\7\d\a\q\2\7\z\n\y\7\3\5\z\l\0\f\n\l\q\i\g\j\9\t\r\p\r\t\4\f\f\x\6\q\l\v\6\l\6\t\h\x\0\6\8\w\3\4\c\6\v\s\s\v\p\c\a\u\1\4\o\0\b\j\u\d\z\c\r\z\5\h\l\y\v\s\v\6\k\2\t\o\1\7\k\4\v\n\k\z\s\w\x\u\n\c\z\a\5\5\8\u\m\0\s\u\j\5\k\4\4\f\u\9\5\x\f\3\j\g\o\b\h\6\1\f\c\z\2\g\g\t\1\w\4\k\x\y\x\x\c\1\j\0\4\0\7\9\2\0\e\i\4\p\7\9\s\s\p\3\d\w\5\p\4\n\z\1\c\d\3\v\0\f\n\c\w\i\g\1\4\9\w\v\h\7\p\h\5\l\8\8\q\y\1\e\j\7\s\v\t\t\c\9\0\f\p\w\a\v\f\7\d\x\5\v\k\9\p\n\i\8\t\2\a\z\j\m\2\7\d\z\w\9\3\5\c\7\k\1\1\0\1\s\1\t\e\q\1\x\r\h\y\z\g\a\c\2\2\p\g\v\c\y\y\z\h\x\g\6\n\1\1\c\1\r\k\9\f\2\5\1\m\7\q\0\g\w\j\2\r\q\v\m\f\t\x\1\a\8\i\q\3\g\q\v\7\1\h\9\c\0\f\c\l\t\n\c\m\7\g\k\t\j\2\6\k\o\3\n\j\m\k\d\7\1\u\j\r\q\s\7\e\g\a\t\b\k\h\e\a\y\r\e\4\v\h\y\3 ]] 00:24:31.734 14:26:23 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:24:31.734 14:26:23 -- dd/posix.sh@86 -- # gen_bytes 512 00:24:31.734 14:26:23 -- dd/common.sh@98 -- # xtrace_disable 00:24:31.734 14:26:23 -- common/autotest_common.sh@10 -- # set +x 00:24:31.734 14:26:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:31.735 14:26:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:24:31.735 [2024-11-18 14:26:23.633174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:31.735 [2024-11-18 14:26:23.633429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143869 ] 00:24:31.735 [2024-11-18 14:26:23.780351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.994 [2024-11-18 14:26:23.852492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.994  [2024-11-18T14:26:24.327Z] Copying: 512/512 [B] (average 500 kBps) 00:24:32.253 00:24:32.253 14:26:24 -- dd/posix.sh@93 -- # [[ v5jcytd9u0vq50bd03b8584j15jm7j52k21g6adynbyfpjhd1251dhl9t0udbkmpfr2ozcy1v55fayt4fkv592d1lqpdjxjvoug3hlc4atnvir0230a1vteuet9puxjqwr55x0mby9vl2qw4mllzo8eowbr1k33shpbxgr1lxafjioalr58roytfjd9enbkwjk2dnn5sntvl3jyfkdvxgz2t4vdc3gjusljq78metbrqtgod7tltzn2206251fn6y1nmmp4a71uw4y7w3rtltveok2p34t9mpt8ox3jixnkfaprj6ty87rgyk1z7b1amfmpw0wpet5ovddxpf7e1z89bcw5424hkmbr56lmh367a6v3air0wyfqddcz9i08vaby17rh7bxd9czmxrjqe7djln4img410v50n09v9vri6679i008gumtmabgnokd1qml9pmvxlpizn53l0g4fc8vgiq7zkh4onqlbi6mr3hjai9d5yle2ubbelb96sey5 == \v\5\j\c\y\t\d\9\u\0\v\q\5\0\b\d\0\3\b\8\5\8\4\j\1\5\j\m\7\j\5\2\k\2\1\g\6\a\d\y\n\b\y\f\p\j\h\d\1\2\5\1\d\h\l\9\t\0\u\d\b\k\m\p\f\r\2\o\z\c\y\1\v\5\5\f\a\y\t\4\f\k\v\5\9\2\d\1\l\q\p\d\j\x\j\v\o\u\g\3\h\l\c\4\a\t\n\v\i\r\0\2\3\0\a\1\v\t\e\u\e\t\9\p\u\x\j\q\w\r\5\5\x\0\m\b\y\9\v\l\2\q\w\4\m\l\l\z\o\8\e\o\w\b\r\1\k\3\3\s\h\p\b\x\g\r\1\l\x\a\f\j\i\o\a\l\r\5\8\r\o\y\t\f\j\d\9\e\n\b\k\w\j\k\2\d\n\n\5\s\n\t\v\l\3\j\y\f\k\d\v\x\g\z\2\t\4\v\d\c\3\g\j\u\s\l\j\q\7\8\m\e\t\b\r\q\t\g\o\d\7\t\l\t\z\n\2\2\0\6\2\5\1\f\n\6\y\1\n\m\m\p\4\a\7\1\u\w\4\y\7\w\3\r\t\l\t\v\e\o\k\2\p\3\4\t\9\m\p\t\8\o\x\3\j\i\x\n\k\f\a\p\r\j\6\t\y\8\7\r\g\y\k\1\z\7\b\1\a\m\f\m\p\w\0\w\p\e\t\5\o\v\d\d\x\p\f\7\e\1\z\8\9\b\c\w\5\4\2\4\h\k\m\b\r\5\6\l\m\h\3\6\7\a\6\v\3\a\i\r\0\w\y\f\q\d\d\c\z\9\i\0\8\v\a\b\y\1\7\r\h\7\b\x\d\9\c\z\m\x\r\j\q\e\7\d\j\l\n\4\i\m\g\4\1\0\v\5\0\n\0\9\v\9\v\r\i\6\6\7\9\i\0\0\8\g\u\m\t\m\a\b\g\n\o\k\d\1\q\m\l\9\p\m\v\x\l\p\i\z\n\5\3\l\0\g\4\f\c\8\v\g\i\q\7\z\k\h\4\o\n\q\l\b\i\6\m\r\3\h\j\a\i\9\d\5\y\l\e\2\u\b\b\e\l\b\9\6\s\e\y\5 ]] 00:24:32.253 14:26:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:32.253 14:26:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:24:32.253 [2024-11-18 14:26:24.252854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:32.253 [2024-11-18 14:26:24.253030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143875 ] 00:24:32.512 [2024-11-18 14:26:24.391840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.512 [2024-11-18 14:26:24.453158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.512  [2024-11-18T14:26:24.845Z] Copying: 512/512 [B] (average 500 kBps) 00:24:32.771 00:24:32.771 14:26:24 -- dd/posix.sh@93 -- # [[ v5jcytd9u0vq50bd03b8584j15jm7j52k21g6adynbyfpjhd1251dhl9t0udbkmpfr2ozcy1v55fayt4fkv592d1lqpdjxjvoug3hlc4atnvir0230a1vteuet9puxjqwr55x0mby9vl2qw4mllzo8eowbr1k33shpbxgr1lxafjioalr58roytfjd9enbkwjk2dnn5sntvl3jyfkdvxgz2t4vdc3gjusljq78metbrqtgod7tltzn2206251fn6y1nmmp4a71uw4y7w3rtltveok2p34t9mpt8ox3jixnkfaprj6ty87rgyk1z7b1amfmpw0wpet5ovddxpf7e1z89bcw5424hkmbr56lmh367a6v3air0wyfqddcz9i08vaby17rh7bxd9czmxrjqe7djln4img410v50n09v9vri6679i008gumtmabgnokd1qml9pmvxlpizn53l0g4fc8vgiq7zkh4onqlbi6mr3hjai9d5yle2ubbelb96sey5 == \v\5\j\c\y\t\d\9\u\0\v\q\5\0\b\d\0\3\b\8\5\8\4\j\1\5\j\m\7\j\5\2\k\2\1\g\6\a\d\y\n\b\y\f\p\j\h\d\1\2\5\1\d\h\l\9\t\0\u\d\b\k\m\p\f\r\2\o\z\c\y\1\v\5\5\f\a\y\t\4\f\k\v\5\9\2\d\1\l\q\p\d\j\x\j\v\o\u\g\3\h\l\c\4\a\t\n\v\i\r\0\2\3\0\a\1\v\t\e\u\e\t\9\p\u\x\j\q\w\r\5\5\x\0\m\b\y\9\v\l\2\q\w\4\m\l\l\z\o\8\e\o\w\b\r\1\k\3\3\s\h\p\b\x\g\r\1\l\x\a\f\j\i\o\a\l\r\5\8\r\o\y\t\f\j\d\9\e\n\b\k\w\j\k\2\d\n\n\5\s\n\t\v\l\3\j\y\f\k\d\v\x\g\z\2\t\4\v\d\c\3\g\j\u\s\l\j\q\7\8\m\e\t\b\r\q\t\g\o\d\7\t\l\t\z\n\2\2\0\6\2\5\1\f\n\6\y\1\n\m\m\p\4\a\7\1\u\w\4\y\7\w\3\r\t\l\t\v\e\o\k\2\p\3\4\t\9\m\p\t\8\o\x\3\j\i\x\n\k\f\a\p\r\j\6\t\y\8\7\r\g\y\k\1\z\7\b\1\a\m\f\m\p\w\0\w\p\e\t\5\o\v\d\d\x\p\f\7\e\1\z\8\9\b\c\w\5\4\2\4\h\k\m\b\r\5\6\l\m\h\3\6\7\a\6\v\3\a\i\r\0\w\y\f\q\d\d\c\z\9\i\0\8\v\a\b\y\1\7\r\h\7\b\x\d\9\c\z\m\x\r\j\q\e\7\d\j\l\n\4\i\m\g\4\1\0\v\5\0\n\0\9\v\9\v\r\i\6\6\7\9\i\0\0\8\g\u\m\t\m\a\b\g\n\o\k\d\1\q\m\l\9\p\m\v\x\l\p\i\z\n\5\3\l\0\g\4\f\c\8\v\g\i\q\7\z\k\h\4\o\n\q\l\b\i\6\m\r\3\h\j\a\i\9\d\5\y\l\e\2\u\b\b\e\l\b\9\6\s\e\y\5 ]] 00:24:32.771 14:26:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:32.771 14:26:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:24:33.031 [2024-11-18 14:26:24.868764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:33.031 [2024-11-18 14:26:24.868982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143892 ] 00:24:33.031 [2024-11-18 14:26:25.016137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.031 [2024-11-18 14:26:25.084551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.290  [2024-11-18T14:26:25.624Z] Copying: 512/512 [B] (average 250 kBps) 00:24:33.550 00:24:33.550 14:26:25 -- dd/posix.sh@93 -- # [[ v5jcytd9u0vq50bd03b8584j15jm7j52k21g6adynbyfpjhd1251dhl9t0udbkmpfr2ozcy1v55fayt4fkv592d1lqpdjxjvoug3hlc4atnvir0230a1vteuet9puxjqwr55x0mby9vl2qw4mllzo8eowbr1k33shpbxgr1lxafjioalr58roytfjd9enbkwjk2dnn5sntvl3jyfkdvxgz2t4vdc3gjusljq78metbrqtgod7tltzn2206251fn6y1nmmp4a71uw4y7w3rtltveok2p34t9mpt8ox3jixnkfaprj6ty87rgyk1z7b1amfmpw0wpet5ovddxpf7e1z89bcw5424hkmbr56lmh367a6v3air0wyfqddcz9i08vaby17rh7bxd9czmxrjqe7djln4img410v50n09v9vri6679i008gumtmabgnokd1qml9pmvxlpizn53l0g4fc8vgiq7zkh4onqlbi6mr3hjai9d5yle2ubbelb96sey5 == \v\5\j\c\y\t\d\9\u\0\v\q\5\0\b\d\0\3\b\8\5\8\4\j\1\5\j\m\7\j\5\2\k\2\1\g\6\a\d\y\n\b\y\f\p\j\h\d\1\2\5\1\d\h\l\9\t\0\u\d\b\k\m\p\f\r\2\o\z\c\y\1\v\5\5\f\a\y\t\4\f\k\v\5\9\2\d\1\l\q\p\d\j\x\j\v\o\u\g\3\h\l\c\4\a\t\n\v\i\r\0\2\3\0\a\1\v\t\e\u\e\t\9\p\u\x\j\q\w\r\5\5\x\0\m\b\y\9\v\l\2\q\w\4\m\l\l\z\o\8\e\o\w\b\r\1\k\3\3\s\h\p\b\x\g\r\1\l\x\a\f\j\i\o\a\l\r\5\8\r\o\y\t\f\j\d\9\e\n\b\k\w\j\k\2\d\n\n\5\s\n\t\v\l\3\j\y\f\k\d\v\x\g\z\2\t\4\v\d\c\3\g\j\u\s\l\j\q\7\8\m\e\t\b\r\q\t\g\o\d\7\t\l\t\z\n\2\2\0\6\2\5\1\f\n\6\y\1\n\m\m\p\4\a\7\1\u\w\4\y\7\w\3\r\t\l\t\v\e\o\k\2\p\3\4\t\9\m\p\t\8\o\x\3\j\i\x\n\k\f\a\p\r\j\6\t\y\8\7\r\g\y\k\1\z\7\b\1\a\m\f\m\p\w\0\w\p\e\t\5\o\v\d\d\x\p\f\7\e\1\z\8\9\b\c\w\5\4\2\4\h\k\m\b\r\5\6\l\m\h\3\6\7\a\6\v\3\a\i\r\0\w\y\f\q\d\d\c\z\9\i\0\8\v\a\b\y\1\7\r\h\7\b\x\d\9\c\z\m\x\r\j\q\e\7\d\j\l\n\4\i\m\g\4\1\0\v\5\0\n\0\9\v\9\v\r\i\6\6\7\9\i\0\0\8\g\u\m\t\m\a\b\g\n\o\k\d\1\q\m\l\9\p\m\v\x\l\p\i\z\n\5\3\l\0\g\4\f\c\8\v\g\i\q\7\z\k\h\4\o\n\q\l\b\i\6\m\r\3\h\j\a\i\9\d\5\y\l\e\2\u\b\b\e\l\b\9\6\s\e\y\5 ]] 00:24:33.550 14:26:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:33.550 14:26:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:24:33.550 [2024-11-18 14:26:25.501700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:33.550 [2024-11-18 14:26:25.501956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143904 ] 00:24:33.810 [2024-11-18 14:26:25.648556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.810 [2024-11-18 14:26:25.719550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.810  [2024-11-18T14:26:26.143Z] Copying: 512/512 [B] (average 166 kBps) 00:24:34.069 00:24:34.069 14:26:26 -- dd/posix.sh@93 -- # [[ v5jcytd9u0vq50bd03b8584j15jm7j52k21g6adynbyfpjhd1251dhl9t0udbkmpfr2ozcy1v55fayt4fkv592d1lqpdjxjvoug3hlc4atnvir0230a1vteuet9puxjqwr55x0mby9vl2qw4mllzo8eowbr1k33shpbxgr1lxafjioalr58roytfjd9enbkwjk2dnn5sntvl3jyfkdvxgz2t4vdc3gjusljq78metbrqtgod7tltzn2206251fn6y1nmmp4a71uw4y7w3rtltveok2p34t9mpt8ox3jixnkfaprj6ty87rgyk1z7b1amfmpw0wpet5ovddxpf7e1z89bcw5424hkmbr56lmh367a6v3air0wyfqddcz9i08vaby17rh7bxd9czmxrjqe7djln4img410v50n09v9vri6679i008gumtmabgnokd1qml9pmvxlpizn53l0g4fc8vgiq7zkh4onqlbi6mr3hjai9d5yle2ubbelb96sey5 == \v\5\j\c\y\t\d\9\u\0\v\q\5\0\b\d\0\3\b\8\5\8\4\j\1\5\j\m\7\j\5\2\k\2\1\g\6\a\d\y\n\b\y\f\p\j\h\d\1\2\5\1\d\h\l\9\t\0\u\d\b\k\m\p\f\r\2\o\z\c\y\1\v\5\5\f\a\y\t\4\f\k\v\5\9\2\d\1\l\q\p\d\j\x\j\v\o\u\g\3\h\l\c\4\a\t\n\v\i\r\0\2\3\0\a\1\v\t\e\u\e\t\9\p\u\x\j\q\w\r\5\5\x\0\m\b\y\9\v\l\2\q\w\4\m\l\l\z\o\8\e\o\w\b\r\1\k\3\3\s\h\p\b\x\g\r\1\l\x\a\f\j\i\o\a\l\r\5\8\r\o\y\t\f\j\d\9\e\n\b\k\w\j\k\2\d\n\n\5\s\n\t\v\l\3\j\y\f\k\d\v\x\g\z\2\t\4\v\d\c\3\g\j\u\s\l\j\q\7\8\m\e\t\b\r\q\t\g\o\d\7\t\l\t\z\n\2\2\0\6\2\5\1\f\n\6\y\1\n\m\m\p\4\a\7\1\u\w\4\y\7\w\3\r\t\l\t\v\e\o\k\2\p\3\4\t\9\m\p\t\8\o\x\3\j\i\x\n\k\f\a\p\r\j\6\t\y\8\7\r\g\y\k\1\z\7\b\1\a\m\f\m\p\w\0\w\p\e\t\5\o\v\d\d\x\p\f\7\e\1\z\8\9\b\c\w\5\4\2\4\h\k\m\b\r\5\6\l\m\h\3\6\7\a\6\v\3\a\i\r\0\w\y\f\q\d\d\c\z\9\i\0\8\v\a\b\y\1\7\r\h\7\b\x\d\9\c\z\m\x\r\j\q\e\7\d\j\l\n\4\i\m\g\4\1\0\v\5\0\n\0\9\v\9\v\r\i\6\6\7\9\i\0\0\8\g\u\m\t\m\a\b\g\n\o\k\d\1\q\m\l\9\p\m\v\x\l\p\i\z\n\5\3\l\0\g\4\f\c\8\v\g\i\q\7\z\k\h\4\o\n\q\l\b\i\6\m\r\3\h\j\a\i\9\d\5\y\l\e\2\u\b\b\e\l\b\9\6\s\e\y\5 ]] 00:24:34.069 00:24:34.069 real 0m5.000s 00:24:34.069 user 0m2.362s 00:24:34.069 sys 0m1.499s 00:24:34.069 14:26:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:34.069 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:24:34.069 ************************************ 00:24:34.069 END TEST dd_flags_misc 00:24:34.069 ************************************ 00:24:34.069 14:26:26 -- dd/posix.sh@131 -- # tests_forced_aio 00:24:34.069 14:26:26 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:24:34.069 * Second test run, using AIO 00:24:34.069 14:26:26 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:24:34.069 14:26:26 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:24:34.069 14:26:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:34.069 14:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:34.069 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:24:34.329 ************************************ 00:24:34.329 START TEST dd_flag_append_forced_aio 00:24:34.329 ************************************ 00:24:34.329 14:26:26 -- common/autotest_common.sh@1114 -- # append 00:24:34.329 14:26:26 -- dd/posix.sh@16 -- # local dump0 00:24:34.329 14:26:26 -- dd/posix.sh@17 -- # local dump1 00:24:34.329 14:26:26 -- dd/posix.sh@19 -- # gen_bytes 32 00:24:34.329 14:26:26 -- dd/common.sh@98 
-- # xtrace_disable 00:24:34.329 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:24:34.329 14:26:26 -- dd/posix.sh@19 -- # dump0=lp3m7lry9agmzexr7jm86865bmz7fsk2 00:24:34.329 14:26:26 -- dd/posix.sh@20 -- # gen_bytes 32 00:24:34.329 14:26:26 -- dd/common.sh@98 -- # xtrace_disable 00:24:34.329 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:24:34.329 14:26:26 -- dd/posix.sh@20 -- # dump1=gtrrv3fq4dh6kqiaxpwcx9htd6byjveg 00:24:34.329 14:26:26 -- dd/posix.sh@22 -- # printf %s lp3m7lry9agmzexr7jm86865bmz7fsk2 00:24:34.329 14:26:26 -- dd/posix.sh@23 -- # printf %s gtrrv3fq4dh6kqiaxpwcx9htd6byjveg 00:24:34.329 14:26:26 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:24:34.329 [2024-11-18 14:26:26.210866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:34.329 [2024-11-18 14:26:26.211100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143935 ] 00:24:34.329 [2024-11-18 14:26:26.357629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.588 [2024-11-18 14:26:26.431998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.588  [2024-11-18T14:26:26.921Z] Copying: 32/32 [B] (average 31 kBps) 00:24:34.847 00:24:34.847 14:26:26 -- dd/posix.sh@27 -- # [[ gtrrv3fq4dh6kqiaxpwcx9htd6byjveglp3m7lry9agmzexr7jm86865bmz7fsk2 == \g\t\r\r\v\3\f\q\4\d\h\6\k\q\i\a\x\p\w\c\x\9\h\t\d\6\b\y\j\v\e\g\l\p\3\m\7\l\r\y\9\a\g\m\z\e\x\r\7\j\m\8\6\8\6\5\b\m\z\7\f\s\k\2 ]] 00:24:34.847 00:24:34.847 real 0m0.639s 00:24:34.847 user 0m0.282s 00:24:34.847 sys 0m0.218s 00:24:34.847 14:26:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:34.847 ************************************ 00:24:34.847 END TEST dd_flag_append_forced_aio 00:24:34.847 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:24:34.847 ************************************ 00:24:34.847 14:26:26 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:24:34.847 14:26:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:34.847 14:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:34.847 14:26:26 -- common/autotest_common.sh@10 -- # set +x 00:24:34.847 ************************************ 00:24:34.847 START TEST dd_flag_directory_forced_aio 00:24:34.847 ************************************ 00:24:34.847 14:26:26 -- common/autotest_common.sh@1114 -- # directory 00:24:34.847 14:26:26 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:34.847 14:26:26 -- common/autotest_common.sh@650 -- # local es=0 00:24:34.847 14:26:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:34.847 14:26:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:34.847 14:26:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.847 14:26:26 -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:34.847 14:26:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.847 14:26:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:34.847 14:26:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.847 14:26:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:34.847 14:26:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:34.847 14:26:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:34.847 [2024-11-18 14:26:26.902641] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:34.847 [2024-11-18 14:26:26.902876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143969 ] 00:24:35.107 [2024-11-18 14:26:27.047361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.107 [2024-11-18 14:26:27.129840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.365 [2024-11-18 14:26:27.215394] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:35.365 [2024-11-18 14:26:27.215640] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:35.365 [2024-11-18 14:26:27.215707] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:35.365 [2024-11-18 14:26:27.391231] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:35.624 14:26:27 -- common/autotest_common.sh@653 -- # es=236 00:24:35.624 14:26:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:35.624 14:26:27 -- common/autotest_common.sh@662 -- # es=108 00:24:35.624 14:26:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:35.624 14:26:27 -- common/autotest_common.sh@670 -- # es=1 00:24:35.624 14:26:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:35.624 14:26:27 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:24:35.624 14:26:27 -- common/autotest_common.sh@650 -- # local es=0 00:24:35.624 14:26:27 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:24:35.624 14:26:27 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:35.624 14:26:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.624 14:26:27 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:35.624 14:26:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.624 14:26:27 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:35.624 14:26:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:35.624 14:26:27 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:35.624 14:26:27 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:35.624 14:26:27 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:24:35.624 [2024-11-18 14:26:27.576651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:35.625 [2024-11-18 14:26:27.576892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143992 ] 00:24:35.883 [2024-11-18 14:26:27.722240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.883 [2024-11-18 14:26:27.801809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.883 [2024-11-18 14:26:27.915225] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:35.883 [2024-11-18 14:26:27.915581] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:24:35.883 [2024-11-18 14:26:27.915674] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:36.142 [2024-11-18 14:26:28.080174] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:36.142 14:26:28 -- common/autotest_common.sh@653 -- # es=236 00:24:36.142 14:26:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:36.142 14:26:28 -- common/autotest_common.sh@662 -- # es=108 00:24:36.142 14:26:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:36.142 14:26:28 -- common/autotest_common.sh@670 -- # es=1 00:24:36.142 14:26:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:36.142 00:24:36.142 real 0m1.357s 00:24:36.142 user 0m0.722s 00:24:36.142 sys 0m0.433s 00:24:36.142 14:26:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:36.142 14:26:28 -- common/autotest_common.sh@10 -- # set +x 00:24:36.142 ************************************ 00:24:36.142 END TEST dd_flag_directory_forced_aio 00:24:36.142 ************************************ 00:24:36.401 14:26:28 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:24:36.401 14:26:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:36.401 14:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:36.401 14:26:28 -- common/autotest_common.sh@10 -- # set +x 00:24:36.401 ************************************ 00:24:36.401 START TEST dd_flag_nofollow_forced_aio 00:24:36.401 ************************************ 00:24:36.401 14:26:28 -- common/autotest_common.sh@1114 -- # nofollow 00:24:36.401 14:26:28 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:24:36.401 14:26:28 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:24:36.401 14:26:28 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:24:36.401 14:26:28 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:24:36.401 14:26:28 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:36.401 14:26:28 -- common/autotest_common.sh@650 -- # local es=0 00:24:36.401 14:26:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:36.401 14:26:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.401 14:26:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.401 14:26:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.401 14:26:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.401 14:26:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.401 14:26:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.401 14:26:28 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.401 14:26:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:36.401 14:26:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:36.401 [2024-11-18 14:26:28.327001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:36.401 [2024-11-18 14:26:28.327228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144018 ] 00:24:36.401 [2024-11-18 14:26:28.473357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.660 [2024-11-18 14:26:28.540245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.660 [2024-11-18 14:26:28.648969] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:24:36.660 [2024-11-18 14:26:28.649377] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:24:36.660 [2024-11-18 14:26:28.649454] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:36.918 [2024-11-18 14:26:28.814390] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:36.918 14:26:28 -- common/autotest_common.sh@653 -- # es=216 00:24:36.918 14:26:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:36.918 14:26:28 -- common/autotest_common.sh@662 -- # es=88 00:24:36.918 14:26:28 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:36.918 14:26:28 -- common/autotest_common.sh@670 -- # es=1 00:24:36.918 14:26:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:36.918 14:26:28 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:36.918 14:26:28 -- common/autotest_common.sh@650 -- # local es=0 00:24:36.918 14:26:28 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:36.918 14:26:28 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.918 14:26:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.918 14:26:28 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.918 14:26:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.918 14:26:28 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.918 14:26:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:36.918 14:26:28 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:36.918 14:26:28 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:36.918 14:26:28 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:24:37.181 [2024-11-18 14:26:28.993387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:37.181 [2024-11-18 14:26:28.993636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144039 ] 00:24:37.181 [2024-11-18 14:26:29.138572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.181 [2024-11-18 14:26:29.217756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.443 [2024-11-18 14:26:29.331497] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:24:37.443 [2024-11-18 14:26:29.331914] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:24:37.443 [2024-11-18 14:26:29.332003] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:37.443 [2024-11-18 14:26:29.496805] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:37.701 14:26:29 -- common/autotest_common.sh@653 -- # es=216 00:24:37.701 14:26:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:37.701 14:26:29 -- common/autotest_common.sh@662 -- # es=88 00:24:37.701 14:26:29 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:37.702 14:26:29 -- common/autotest_common.sh@670 -- # es=1 00:24:37.702 14:26:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:37.702 14:26:29 -- dd/posix.sh@46 -- # gen_bytes 512 00:24:37.702 14:26:29 -- dd/common.sh@98 -- # xtrace_disable 00:24:37.702 14:26:29 -- common/autotest_common.sh@10 -- # set +x 00:24:37.702 14:26:29 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:37.702 [2024-11-18 14:26:29.684740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
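Everything from the "* Second test run, using AIO" marker onwards replays the same scenarios; the only difference is the DD_APP+=("--aio") traced there, which makes every later invocation exercise spdk_dd's asynchronous I/O path. A sketch of that wiring (DD_APP's initialisation is not shown in this log and is assumed here):

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)   # assumed initial contents
    DD_APP+=("--aio")                                         # from here on, all runs get --aio
    "${DD_APP[@]}" --if=dd.dump0.link --of=dd.dump1           # expands to: spdk_dd --aio ...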
00:24:37.702 [2024-11-18 14:26:29.684977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144053 ] 00:24:37.960 [2024-11-18 14:26:29.830102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.960 [2024-11-18 14:26:29.898376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.960  [2024-11-18T14:26:30.602Z] Copying: 512/512 [B] (average 500 kBps) 00:24:38.528 00:24:38.528 14:26:30 -- dd/posix.sh@49 -- # [[ jd0ktgm3it18m1o6n1fz8gx2eyi3jiopufa11tku5zrnfwxxkdtia9exqccjlg3k68cmr8zp1d321rftg8utcfw8dbush9755ygherldi90476b1hsgtwgfeaeuvq0gdsc16skuqte897muwvvanuac5g60ng5erxpocfjm7tf90dovlzwpef34uzentf94on0dgad05c8ot6sth11tgfpsa6l6d08nappn81u135udakppqdc0p7xumtql51ntitjb7o2ptpfsgj3eddln1ar89tes6oc10fz0l1zhieh4ufdfna7hjpxucm4ltyp0df3ghti5q046k9l3zd44z7dnhmu0fjpihf6cl7xtwv3f874ygenw3yjxy3zlxjownfvyepn4hxmj4fs6fqs44xjhaa1k4bb3rac2fbn3qv4jkrm0z45hopx0pw142wg4veyq1tgxqvx9pgv6m2n0qr72k84zyhytxklm8gxg1blxg9fk7csbnx5ggu7rxvalj == \j\d\0\k\t\g\m\3\i\t\1\8\m\1\o\6\n\1\f\z\8\g\x\2\e\y\i\3\j\i\o\p\u\f\a\1\1\t\k\u\5\z\r\n\f\w\x\x\k\d\t\i\a\9\e\x\q\c\c\j\l\g\3\k\6\8\c\m\r\8\z\p\1\d\3\2\1\r\f\t\g\8\u\t\c\f\w\8\d\b\u\s\h\9\7\5\5\y\g\h\e\r\l\d\i\9\0\4\7\6\b\1\h\s\g\t\w\g\f\e\a\e\u\v\q\0\g\d\s\c\1\6\s\k\u\q\t\e\8\9\7\m\u\w\v\v\a\n\u\a\c\5\g\6\0\n\g\5\e\r\x\p\o\c\f\j\m\7\t\f\9\0\d\o\v\l\z\w\p\e\f\3\4\u\z\e\n\t\f\9\4\o\n\0\d\g\a\d\0\5\c\8\o\t\6\s\t\h\1\1\t\g\f\p\s\a\6\l\6\d\0\8\n\a\p\p\n\8\1\u\1\3\5\u\d\a\k\p\p\q\d\c\0\p\7\x\u\m\t\q\l\5\1\n\t\i\t\j\b\7\o\2\p\t\p\f\s\g\j\3\e\d\d\l\n\1\a\r\8\9\t\e\s\6\o\c\1\0\f\z\0\l\1\z\h\i\e\h\4\u\f\d\f\n\a\7\h\j\p\x\u\c\m\4\l\t\y\p\0\d\f\3\g\h\t\i\5\q\0\4\6\k\9\l\3\z\d\4\4\z\7\d\n\h\m\u\0\f\j\p\i\h\f\6\c\l\7\x\t\w\v\3\f\8\7\4\y\g\e\n\w\3\y\j\x\y\3\z\l\x\j\o\w\n\f\v\y\e\p\n\4\h\x\m\j\4\f\s\6\f\q\s\4\4\x\j\h\a\a\1\k\4\b\b\3\r\a\c\2\f\b\n\3\q\v\4\j\k\r\m\0\z\4\5\h\o\p\x\0\p\w\1\4\2\w\g\4\v\e\y\q\1\t\g\x\q\v\x\9\p\g\v\6\m\2\n\0\q\r\7\2\k\8\4\z\y\h\y\t\x\k\l\m\8\g\x\g\1\b\l\x\g\9\f\k\7\c\s\b\n\x\5\g\g\u\7\r\x\v\a\l\j ]] 00:24:38.528 00:24:38.528 real 0m2.108s 00:24:38.528 user 0m1.074s 00:24:38.528 sys 0m0.692s 00:24:38.528 14:26:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:38.528 14:26:30 -- common/autotest_common.sh@10 -- # set +x 00:24:38.528 ************************************ 00:24:38.528 END TEST dd_flag_nofollow_forced_aio 00:24:38.528 ************************************ 00:24:38.528 14:26:30 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:24:38.528 14:26:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:38.528 14:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:38.528 14:26:30 -- common/autotest_common.sh@10 -- # set +x 00:24:38.528 ************************************ 00:24:38.528 START TEST dd_flag_noatime_forced_aio 00:24:38.528 ************************************ 00:24:38.528 14:26:30 -- common/autotest_common.sh@1114 -- # noatime 00:24:38.528 14:26:30 -- dd/posix.sh@53 -- # local atime_if 00:24:38.528 14:26:30 -- dd/posix.sh@54 -- # local atime_of 00:24:38.528 14:26:30 -- dd/posix.sh@58 -- # gen_bytes 512 00:24:38.528 14:26:30 -- dd/common.sh@98 -- # xtrace_disable 00:24:38.528 14:26:30 -- common/autotest_common.sh@10 -- # set +x 00:24:38.528 14:26:30 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:38.528 14:26:30 -- dd/posix.sh@60 -- 
# atime_if=1731939990 00:24:38.528 14:26:30 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:38.528 14:26:30 -- dd/posix.sh@61 -- # atime_of=1731939990 00:24:38.528 14:26:30 -- dd/posix.sh@66 -- # sleep 1 00:24:39.465 14:26:31 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:39.465 [2024-11-18 14:26:31.511181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:39.465 [2024-11-18 14:26:31.511515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144105 ] 00:24:39.724 [2024-11-18 14:26:31.662824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.724 [2024-11-18 14:26:31.752044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.990  [2024-11-18T14:26:32.322Z] Copying: 512/512 [B] (average 500 kBps) 00:24:40.248 00:24:40.248 14:26:32 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:40.248 14:26:32 -- dd/posix.sh@69 -- # (( atime_if == 1731939990 )) 00:24:40.248 14:26:32 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:40.248 14:26:32 -- dd/posix.sh@70 -- # (( atime_of == 1731939990 )) 00:24:40.248 14:26:32 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:40.248 [2024-11-18 14:26:32.274811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:40.248 [2024-11-18 14:26:32.275014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144118 ] 00:24:40.507 [2024-11-18 14:26:32.413466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.507 [2024-11-18 14:26:32.490465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.766  [2024-11-18T14:26:33.098Z] Copying: 512/512 [B] (average 500 kBps) 00:24:41.024 00:24:41.024 14:26:32 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:41.024 14:26:32 -- dd/posix.sh@73 -- # (( atime_if < 1731939992 )) 00:24:41.024 00:24:41.024 real 0m2.529s 00:24:41.024 user 0m0.796s 00:24:41.024 sys 0m0.468s 00:24:41.024 14:26:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:41.024 ************************************ 00:24:41.024 14:26:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.024 END TEST dd_flag_noatime_forced_aio 00:24:41.024 ************************************ 00:24:41.024 14:26:32 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:24:41.024 14:26:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:41.024 14:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:41.024 14:26:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.024 ************************************ 00:24:41.024 START TEST dd_flags_misc_forced_aio 00:24:41.024 ************************************ 00:24:41.024 14:26:33 -- common/autotest_common.sh@1114 -- # io 00:24:41.024 14:26:33 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:24:41.024 14:26:33 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:24:41.024 14:26:33 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:24:41.024 14:26:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:24:41.024 14:26:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:24:41.024 14:26:33 -- dd/common.sh@98 -- # xtrace_disable 00:24:41.024 14:26:33 -- common/autotest_common.sh@10 -- # set +x 00:24:41.024 14:26:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:41.024 14:26:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:24:41.024 [2024-11-18 14:26:33.070025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
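The noatime pass that finishes here reduces to four steps: record the source file's atime with stat, sleep past the current second, copy with --iflag=noatime, and assert the timestamp did not move; a second copy without the flag must move it. A minimal sketch of that check (paths shortened to the dump0/dump1 basenames; this assumes the workspace filesystem is not itself mounted noatime, which would mask the negative case):

atime_if=$(stat --printf=%X dd.dump0)            # %X = atime in epoch seconds
sleep 1                                          # guarantee a later access gets a newer stamp
spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( atime_if == $(stat --printf=%X dd.dump0) ))   # O_NOATIME read: source atime unchanged
spdk_dd --aio --if=dd.dump0 --of=dd.dump1        # same copy without the flag
(( atime_if < $(stat --printf=%X dd.dump0) ))    # plain read: atime advanced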
00:24:41.024 [2024-11-18 14:26:33.070227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144149 ] 00:24:41.282 [2024-11-18 14:26:33.209091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.282 [2024-11-18 14:26:33.289611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.540  [2024-11-18T14:26:33.873Z] Copying: 512/512 [B] (average 500 kBps) 00:24:41.799 00:24:41.799 14:26:33 -- dd/posix.sh@93 -- # [[ s2xnl90s21y3kp2hvztha3gappxu1en9pk89pogwmoj1zwiboqia7tyaxk57bmaxxxadjxwq6mxsfn7ibayprweqbjc60enb2gfg102buorbubturkhjpao0ywv1zoyrvth3dkhegyzmgpplhto0kgtgjwuxtvpiu09t0wmp66jcyw67xgtxc7agy0lm8yhfz420a4mw2ukc9qesdr8e9ezg94ijlrubk03z24wob8tdnyvfg6aln5cnus6i1ajrfxi6xuwmxb3pb6a8f30di5vxr66adhivsit1o7ngp6kqwzhaibn75zgtc1sjf19xtbwz81w3p1m6u546vylnahto7ykz1mouyoce3xbhg1es318ct3ci0egwq3d7abngwy5pj4kam7rfj6k2o7pv98wqinusia1bzklajdn0qs718p4qvb3jdr59e56957rgyzh7fmx9s9ymd8b1yj6xkkzdh7w7c2fffn4eka02xrznwdkkldktw1hexbovfhon == \s\2\x\n\l\9\0\s\2\1\y\3\k\p\2\h\v\z\t\h\a\3\g\a\p\p\x\u\1\e\n\9\p\k\8\9\p\o\g\w\m\o\j\1\z\w\i\b\o\q\i\a\7\t\y\a\x\k\5\7\b\m\a\x\x\x\a\d\j\x\w\q\6\m\x\s\f\n\7\i\b\a\y\p\r\w\e\q\b\j\c\6\0\e\n\b\2\g\f\g\1\0\2\b\u\o\r\b\u\b\t\u\r\k\h\j\p\a\o\0\y\w\v\1\z\o\y\r\v\t\h\3\d\k\h\e\g\y\z\m\g\p\p\l\h\t\o\0\k\g\t\g\j\w\u\x\t\v\p\i\u\0\9\t\0\w\m\p\6\6\j\c\y\w\6\7\x\g\t\x\c\7\a\g\y\0\l\m\8\y\h\f\z\4\2\0\a\4\m\w\2\u\k\c\9\q\e\s\d\r\8\e\9\e\z\g\9\4\i\j\l\r\u\b\k\0\3\z\2\4\w\o\b\8\t\d\n\y\v\f\g\6\a\l\n\5\c\n\u\s\6\i\1\a\j\r\f\x\i\6\x\u\w\m\x\b\3\p\b\6\a\8\f\3\0\d\i\5\v\x\r\6\6\a\d\h\i\v\s\i\t\1\o\7\n\g\p\6\k\q\w\z\h\a\i\b\n\7\5\z\g\t\c\1\s\j\f\1\9\x\t\b\w\z\8\1\w\3\p\1\m\6\u\5\4\6\v\y\l\n\a\h\t\o\7\y\k\z\1\m\o\u\y\o\c\e\3\x\b\h\g\1\e\s\3\1\8\c\t\3\c\i\0\e\g\w\q\3\d\7\a\b\n\g\w\y\5\p\j\4\k\a\m\7\r\f\j\6\k\2\o\7\p\v\9\8\w\q\i\n\u\s\i\a\1\b\z\k\l\a\j\d\n\0\q\s\7\1\8\p\4\q\v\b\3\j\d\r\5\9\e\5\6\9\5\7\r\g\y\z\h\7\f\m\x\9\s\9\y\m\d\8\b\1\y\j\6\x\k\k\z\d\h\7\w\7\c\2\f\f\f\n\4\e\k\a\0\2\x\r\z\n\w\d\k\k\l\d\k\t\w\1\h\e\x\b\o\v\f\h\o\n ]] 00:24:41.800 14:26:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:41.800 14:26:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:24:41.800 [2024-11-18 14:26:33.813778] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:41.800 [2024-11-18 14:26:33.814483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144170 ] 00:24:42.058 [2024-11-18 14:26:33.961546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.058 [2024-11-18 14:26:34.041235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.317  [2024-11-18T14:26:34.650Z] Copying: 512/512 [B] (average 500 kBps) 00:24:42.576 00:24:42.576 14:26:34 -- dd/posix.sh@93 -- # [[ s2xnl90s21y3kp2hvztha3gappxu1en9pk89pogwmoj1zwiboqia7tyaxk57bmaxxxadjxwq6mxsfn7ibayprweqbjc60enb2gfg102buorbubturkhjpao0ywv1zoyrvth3dkhegyzmgpplhto0kgtgjwuxtvpiu09t0wmp66jcyw67xgtxc7agy0lm8yhfz420a4mw2ukc9qesdr8e9ezg94ijlrubk03z24wob8tdnyvfg6aln5cnus6i1ajrfxi6xuwmxb3pb6a8f30di5vxr66adhivsit1o7ngp6kqwzhaibn75zgtc1sjf19xtbwz81w3p1m6u546vylnahto7ykz1mouyoce3xbhg1es318ct3ci0egwq3d7abngwy5pj4kam7rfj6k2o7pv98wqinusia1bzklajdn0qs718p4qvb3jdr59e56957rgyzh7fmx9s9ymd8b1yj6xkkzdh7w7c2fffn4eka02xrznwdkkldktw1hexbovfhon == \s\2\x\n\l\9\0\s\2\1\y\3\k\p\2\h\v\z\t\h\a\3\g\a\p\p\x\u\1\e\n\9\p\k\8\9\p\o\g\w\m\o\j\1\z\w\i\b\o\q\i\a\7\t\y\a\x\k\5\7\b\m\a\x\x\x\a\d\j\x\w\q\6\m\x\s\f\n\7\i\b\a\y\p\r\w\e\q\b\j\c\6\0\e\n\b\2\g\f\g\1\0\2\b\u\o\r\b\u\b\t\u\r\k\h\j\p\a\o\0\y\w\v\1\z\o\y\r\v\t\h\3\d\k\h\e\g\y\z\m\g\p\p\l\h\t\o\0\k\g\t\g\j\w\u\x\t\v\p\i\u\0\9\t\0\w\m\p\6\6\j\c\y\w\6\7\x\g\t\x\c\7\a\g\y\0\l\m\8\y\h\f\z\4\2\0\a\4\m\w\2\u\k\c\9\q\e\s\d\r\8\e\9\e\z\g\9\4\i\j\l\r\u\b\k\0\3\z\2\4\w\o\b\8\t\d\n\y\v\f\g\6\a\l\n\5\c\n\u\s\6\i\1\a\j\r\f\x\i\6\x\u\w\m\x\b\3\p\b\6\a\8\f\3\0\d\i\5\v\x\r\6\6\a\d\h\i\v\s\i\t\1\o\7\n\g\p\6\k\q\w\z\h\a\i\b\n\7\5\z\g\t\c\1\s\j\f\1\9\x\t\b\w\z\8\1\w\3\p\1\m\6\u\5\4\6\v\y\l\n\a\h\t\o\7\y\k\z\1\m\o\u\y\o\c\e\3\x\b\h\g\1\e\s\3\1\8\c\t\3\c\i\0\e\g\w\q\3\d\7\a\b\n\g\w\y\5\p\j\4\k\a\m\7\r\f\j\6\k\2\o\7\p\v\9\8\w\q\i\n\u\s\i\a\1\b\z\k\l\a\j\d\n\0\q\s\7\1\8\p\4\q\v\b\3\j\d\r\5\9\e\5\6\9\5\7\r\g\y\z\h\7\f\m\x\9\s\9\y\m\d\8\b\1\y\j\6\x\k\k\z\d\h\7\w\7\c\2\f\f\f\n\4\e\k\a\0\2\x\r\z\n\w\d\k\k\l\d\k\t\w\1\h\e\x\b\o\v\f\h\o\n ]] 00:24:42.576 14:26:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:42.576 14:26:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:24:42.576 [2024-11-18 14:26:34.558638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:42.576 [2024-11-18 14:26:34.558868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144186 ] 00:24:42.835 [2024-11-18 14:26:34.709143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.835 [2024-11-18 14:26:34.772059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.835  [2024-11-18T14:26:35.477Z] Copying: 512/512 [B] (average 166 kBps) 00:24:43.403 00:24:43.403 14:26:35 -- dd/posix.sh@93 -- # [[ s2xnl90s21y3kp2hvztha3gappxu1en9pk89pogwmoj1zwiboqia7tyaxk57bmaxxxadjxwq6mxsfn7ibayprweqbjc60enb2gfg102buorbubturkhjpao0ywv1zoyrvth3dkhegyzmgpplhto0kgtgjwuxtvpiu09t0wmp66jcyw67xgtxc7agy0lm8yhfz420a4mw2ukc9qesdr8e9ezg94ijlrubk03z24wob8tdnyvfg6aln5cnus6i1ajrfxi6xuwmxb3pb6a8f30di5vxr66adhivsit1o7ngp6kqwzhaibn75zgtc1sjf19xtbwz81w3p1m6u546vylnahto7ykz1mouyoce3xbhg1es318ct3ci0egwq3d7abngwy5pj4kam7rfj6k2o7pv98wqinusia1bzklajdn0qs718p4qvb3jdr59e56957rgyzh7fmx9s9ymd8b1yj6xkkzdh7w7c2fffn4eka02xrznwdkkldktw1hexbovfhon == \s\2\x\n\l\9\0\s\2\1\y\3\k\p\2\h\v\z\t\h\a\3\g\a\p\p\x\u\1\e\n\9\p\k\8\9\p\o\g\w\m\o\j\1\z\w\i\b\o\q\i\a\7\t\y\a\x\k\5\7\b\m\a\x\x\x\a\d\j\x\w\q\6\m\x\s\f\n\7\i\b\a\y\p\r\w\e\q\b\j\c\6\0\e\n\b\2\g\f\g\1\0\2\b\u\o\r\b\u\b\t\u\r\k\h\j\p\a\o\0\y\w\v\1\z\o\y\r\v\t\h\3\d\k\h\e\g\y\z\m\g\p\p\l\h\t\o\0\k\g\t\g\j\w\u\x\t\v\p\i\u\0\9\t\0\w\m\p\6\6\j\c\y\w\6\7\x\g\t\x\c\7\a\g\y\0\l\m\8\y\h\f\z\4\2\0\a\4\m\w\2\u\k\c\9\q\e\s\d\r\8\e\9\e\z\g\9\4\i\j\l\r\u\b\k\0\3\z\2\4\w\o\b\8\t\d\n\y\v\f\g\6\a\l\n\5\c\n\u\s\6\i\1\a\j\r\f\x\i\6\x\u\w\m\x\b\3\p\b\6\a\8\f\3\0\d\i\5\v\x\r\6\6\a\d\h\i\v\s\i\t\1\o\7\n\g\p\6\k\q\w\z\h\a\i\b\n\7\5\z\g\t\c\1\s\j\f\1\9\x\t\b\w\z\8\1\w\3\p\1\m\6\u\5\4\6\v\y\l\n\a\h\t\o\7\y\k\z\1\m\o\u\y\o\c\e\3\x\b\h\g\1\e\s\3\1\8\c\t\3\c\i\0\e\g\w\q\3\d\7\a\b\n\g\w\y\5\p\j\4\k\a\m\7\r\f\j\6\k\2\o\7\p\v\9\8\w\q\i\n\u\s\i\a\1\b\z\k\l\a\j\d\n\0\q\s\7\1\8\p\4\q\v\b\3\j\d\r\5\9\e\5\6\9\5\7\r\g\y\z\h\7\f\m\x\9\s\9\y\m\d\8\b\1\y\j\6\x\k\k\z\d\h\7\w\7\c\2\f\f\f\n\4\e\k\a\0\2\x\r\z\n\w\d\k\k\l\d\k\t\w\1\h\e\x\b\o\v\f\h\o\n ]] 00:24:43.403 14:26:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:43.403 14:26:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:24:43.403 [2024-11-18 14:26:35.288802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:43.403 [2024-11-18 14:26:35.289029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144192 ] 00:24:43.403 [2024-11-18 14:26:35.434266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.662 [2024-11-18 14:26:35.514605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.662  [2024-11-18T14:26:35.995Z] Copying: 512/512 [B] (average 250 kBps) 00:24:43.921 00:24:43.921 14:26:35 -- dd/posix.sh@93 -- # [[ s2xnl90s21y3kp2hvztha3gappxu1en9pk89pogwmoj1zwiboqia7tyaxk57bmaxxxadjxwq6mxsfn7ibayprweqbjc60enb2gfg102buorbubturkhjpao0ywv1zoyrvth3dkhegyzmgpplhto0kgtgjwuxtvpiu09t0wmp66jcyw67xgtxc7agy0lm8yhfz420a4mw2ukc9qesdr8e9ezg94ijlrubk03z24wob8tdnyvfg6aln5cnus6i1ajrfxi6xuwmxb3pb6a8f30di5vxr66adhivsit1o7ngp6kqwzhaibn75zgtc1sjf19xtbwz81w3p1m6u546vylnahto7ykz1mouyoce3xbhg1es318ct3ci0egwq3d7abngwy5pj4kam7rfj6k2o7pv98wqinusia1bzklajdn0qs718p4qvb3jdr59e56957rgyzh7fmx9s9ymd8b1yj6xkkzdh7w7c2fffn4eka02xrznwdkkldktw1hexbovfhon == \s\2\x\n\l\9\0\s\2\1\y\3\k\p\2\h\v\z\t\h\a\3\g\a\p\p\x\u\1\e\n\9\p\k\8\9\p\o\g\w\m\o\j\1\z\w\i\b\o\q\i\a\7\t\y\a\x\k\5\7\b\m\a\x\x\x\a\d\j\x\w\q\6\m\x\s\f\n\7\i\b\a\y\p\r\w\e\q\b\j\c\6\0\e\n\b\2\g\f\g\1\0\2\b\u\o\r\b\u\b\t\u\r\k\h\j\p\a\o\0\y\w\v\1\z\o\y\r\v\t\h\3\d\k\h\e\g\y\z\m\g\p\p\l\h\t\o\0\k\g\t\g\j\w\u\x\t\v\p\i\u\0\9\t\0\w\m\p\6\6\j\c\y\w\6\7\x\g\t\x\c\7\a\g\y\0\l\m\8\y\h\f\z\4\2\0\a\4\m\w\2\u\k\c\9\q\e\s\d\r\8\e\9\e\z\g\9\4\i\j\l\r\u\b\k\0\3\z\2\4\w\o\b\8\t\d\n\y\v\f\g\6\a\l\n\5\c\n\u\s\6\i\1\a\j\r\f\x\i\6\x\u\w\m\x\b\3\p\b\6\a\8\f\3\0\d\i\5\v\x\r\6\6\a\d\h\i\v\s\i\t\1\o\7\n\g\p\6\k\q\w\z\h\a\i\b\n\7\5\z\g\t\c\1\s\j\f\1\9\x\t\b\w\z\8\1\w\3\p\1\m\6\u\5\4\6\v\y\l\n\a\h\t\o\7\y\k\z\1\m\o\u\y\o\c\e\3\x\b\h\g\1\e\s\3\1\8\c\t\3\c\i\0\e\g\w\q\3\d\7\a\b\n\g\w\y\5\p\j\4\k\a\m\7\r\f\j\6\k\2\o\7\p\v\9\8\w\q\i\n\u\s\i\a\1\b\z\k\l\a\j\d\n\0\q\s\7\1\8\p\4\q\v\b\3\j\d\r\5\9\e\5\6\9\5\7\r\g\y\z\h\7\f\m\x\9\s\9\y\m\d\8\b\1\y\j\6\x\k\k\z\d\h\7\w\7\c\2\f\f\f\n\4\e\k\a\0\2\x\r\z\n\w\d\k\k\l\d\k\t\w\1\h\e\x\b\o\v\f\h\o\n ]] 00:24:43.921 14:26:35 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:24:43.921 14:26:35 -- dd/posix.sh@86 -- # gen_bytes 512 00:24:43.921 14:26:35 -- dd/common.sh@98 -- # xtrace_disable 00:24:43.921 14:26:35 -- common/autotest_common.sh@10 -- # set +x 00:24:44.180 14:26:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:44.180 14:26:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:24:44.180 [2024-11-18 14:26:36.045819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:44.180 [2024-11-18 14:26:36.046054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144209 ] 00:24:44.180 [2024-11-18 14:26:36.184188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.439 [2024-11-18 14:26:36.259106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.439  [2024-11-18T14:26:36.772Z] Copying: 512/512 [B] (average 500 kBps) 00:24:44.698 00:24:44.698 14:26:36 -- dd/posix.sh@93 -- # [[ 3l0267pyfflsbf2187zjc372a7tx4sl2rg0f06p8pwym9qt7lli908gyu7540wvajsr0s59w3tx49evq7hpkx1xlfoehxm282cxuw8kr2l2vuj1qo9oyixxpxv3novpfkdtq23ez3en2otwi3fdg4ic5az3s1ypuz69lc4qwiajhmetvzvkfd8a39sjha8aj4rvemp0pebpkwi5wmpo1i9qz1nqelesfm0qyqinphq1qppu4gmu4ddbsr282z77e7ouopqlgns2f599u4xg0tqh8uy8j3ue3cv3h46j7b18dgudbpzdvttuqx49nd5nwli9rcdxn5re5ox6jhmijl20nmsixsumwbziqjq1oms3yc406k3bcva5nqijtvcg5a1orzkyrd7odom6ldpmmam4q3lzgpiwozo595v7oc6fx34rn10g7ew80j4qp82rdw01qwm6wnwtn4xc5kownboxy8pax41gixs2zykjef8ltdenh6vfdicxhzw9gy4r0 == \3\l\0\2\6\7\p\y\f\f\l\s\b\f\2\1\8\7\z\j\c\3\7\2\a\7\t\x\4\s\l\2\r\g\0\f\0\6\p\8\p\w\y\m\9\q\t\7\l\l\i\9\0\8\g\y\u\7\5\4\0\w\v\a\j\s\r\0\s\5\9\w\3\t\x\4\9\e\v\q\7\h\p\k\x\1\x\l\f\o\e\h\x\m\2\8\2\c\x\u\w\8\k\r\2\l\2\v\u\j\1\q\o\9\o\y\i\x\x\p\x\v\3\n\o\v\p\f\k\d\t\q\2\3\e\z\3\e\n\2\o\t\w\i\3\f\d\g\4\i\c\5\a\z\3\s\1\y\p\u\z\6\9\l\c\4\q\w\i\a\j\h\m\e\t\v\z\v\k\f\d\8\a\3\9\s\j\h\a\8\a\j\4\r\v\e\m\p\0\p\e\b\p\k\w\i\5\w\m\p\o\1\i\9\q\z\1\n\q\e\l\e\s\f\m\0\q\y\q\i\n\p\h\q\1\q\p\p\u\4\g\m\u\4\d\d\b\s\r\2\8\2\z\7\7\e\7\o\u\o\p\q\l\g\n\s\2\f\5\9\9\u\4\x\g\0\t\q\h\8\u\y\8\j\3\u\e\3\c\v\3\h\4\6\j\7\b\1\8\d\g\u\d\b\p\z\d\v\t\t\u\q\x\4\9\n\d\5\n\w\l\i\9\r\c\d\x\n\5\r\e\5\o\x\6\j\h\m\i\j\l\2\0\n\m\s\i\x\s\u\m\w\b\z\i\q\j\q\1\o\m\s\3\y\c\4\0\6\k\3\b\c\v\a\5\n\q\i\j\t\v\c\g\5\a\1\o\r\z\k\y\r\d\7\o\d\o\m\6\l\d\p\m\m\a\m\4\q\3\l\z\g\p\i\w\o\z\o\5\9\5\v\7\o\c\6\f\x\3\4\r\n\1\0\g\7\e\w\8\0\j\4\q\p\8\2\r\d\w\0\1\q\w\m\6\w\n\w\t\n\4\x\c\5\k\o\w\n\b\o\x\y\8\p\a\x\4\1\g\i\x\s\2\z\y\k\j\e\f\8\l\t\d\e\n\h\6\v\f\d\i\c\x\h\z\w\9\g\y\4\r\0 ]] 00:24:44.698 14:26:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:44.698 14:26:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:24:44.957 [2024-11-18 14:26:36.799548] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:44.957 [2024-11-18 14:26:36.799784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144221 ] 00:24:44.957 [2024-11-18 14:26:36.938349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.957 [2024-11-18 14:26:37.014203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.216  [2024-11-18T14:26:37.549Z] Copying: 512/512 [B] (average 500 kBps) 00:24:45.475 00:24:45.475 14:26:37 -- dd/posix.sh@93 -- # [[ 3l0267pyfflsbf2187zjc372a7tx4sl2rg0f06p8pwym9qt7lli908gyu7540wvajsr0s59w3tx49evq7hpkx1xlfoehxm282cxuw8kr2l2vuj1qo9oyixxpxv3novpfkdtq23ez3en2otwi3fdg4ic5az3s1ypuz69lc4qwiajhmetvzvkfd8a39sjha8aj4rvemp0pebpkwi5wmpo1i9qz1nqelesfm0qyqinphq1qppu4gmu4ddbsr282z77e7ouopqlgns2f599u4xg0tqh8uy8j3ue3cv3h46j7b18dgudbpzdvttuqx49nd5nwli9rcdxn5re5ox6jhmijl20nmsixsumwbziqjq1oms3yc406k3bcva5nqijtvcg5a1orzkyrd7odom6ldpmmam4q3lzgpiwozo595v7oc6fx34rn10g7ew80j4qp82rdw01qwm6wnwtn4xc5kownboxy8pax41gixs2zykjef8ltdenh6vfdicxhzw9gy4r0 == \3\l\0\2\6\7\p\y\f\f\l\s\b\f\2\1\8\7\z\j\c\3\7\2\a\7\t\x\4\s\l\2\r\g\0\f\0\6\p\8\p\w\y\m\9\q\t\7\l\l\i\9\0\8\g\y\u\7\5\4\0\w\v\a\j\s\r\0\s\5\9\w\3\t\x\4\9\e\v\q\7\h\p\k\x\1\x\l\f\o\e\h\x\m\2\8\2\c\x\u\w\8\k\r\2\l\2\v\u\j\1\q\o\9\o\y\i\x\x\p\x\v\3\n\o\v\p\f\k\d\t\q\2\3\e\z\3\e\n\2\o\t\w\i\3\f\d\g\4\i\c\5\a\z\3\s\1\y\p\u\z\6\9\l\c\4\q\w\i\a\j\h\m\e\t\v\z\v\k\f\d\8\a\3\9\s\j\h\a\8\a\j\4\r\v\e\m\p\0\p\e\b\p\k\w\i\5\w\m\p\o\1\i\9\q\z\1\n\q\e\l\e\s\f\m\0\q\y\q\i\n\p\h\q\1\q\p\p\u\4\g\m\u\4\d\d\b\s\r\2\8\2\z\7\7\e\7\o\u\o\p\q\l\g\n\s\2\f\5\9\9\u\4\x\g\0\t\q\h\8\u\y\8\j\3\u\e\3\c\v\3\h\4\6\j\7\b\1\8\d\g\u\d\b\p\z\d\v\t\t\u\q\x\4\9\n\d\5\n\w\l\i\9\r\c\d\x\n\5\r\e\5\o\x\6\j\h\m\i\j\l\2\0\n\m\s\i\x\s\u\m\w\b\z\i\q\j\q\1\o\m\s\3\y\c\4\0\6\k\3\b\c\v\a\5\n\q\i\j\t\v\c\g\5\a\1\o\r\z\k\y\r\d\7\o\d\o\m\6\l\d\p\m\m\a\m\4\q\3\l\z\g\p\i\w\o\z\o\5\9\5\v\7\o\c\6\f\x\3\4\r\n\1\0\g\7\e\w\8\0\j\4\q\p\8\2\r\d\w\0\1\q\w\m\6\w\n\w\t\n\4\x\c\5\k\o\w\n\b\o\x\y\8\p\a\x\4\1\g\i\x\s\2\z\y\k\j\e\f\8\l\t\d\e\n\h\6\v\f\d\i\c\x\h\z\w\9\g\y\4\r\0 ]] 00:24:45.475 14:26:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:45.475 14:26:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:24:45.475 [2024-11-18 14:26:37.543010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:45.475 [2024-11-18 14:26:37.543264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144234 ] 00:24:45.733 [2024-11-18 14:26:37.689243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.733 [2024-11-18 14:26:37.768348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.992  [2024-11-18T14:26:38.325Z] Copying: 512/512 [B] (average 100 kBps) 00:24:46.251 00:24:46.252 14:26:38 -- dd/posix.sh@93 -- # [[ 3l0267pyfflsbf2187zjc372a7tx4sl2rg0f06p8pwym9qt7lli908gyu7540wvajsr0s59w3tx49evq7hpkx1xlfoehxm282cxuw8kr2l2vuj1qo9oyixxpxv3novpfkdtq23ez3en2otwi3fdg4ic5az3s1ypuz69lc4qwiajhmetvzvkfd8a39sjha8aj4rvemp0pebpkwi5wmpo1i9qz1nqelesfm0qyqinphq1qppu4gmu4ddbsr282z77e7ouopqlgns2f599u4xg0tqh8uy8j3ue3cv3h46j7b18dgudbpzdvttuqx49nd5nwli9rcdxn5re5ox6jhmijl20nmsixsumwbziqjq1oms3yc406k3bcva5nqijtvcg5a1orzkyrd7odom6ldpmmam4q3lzgpiwozo595v7oc6fx34rn10g7ew80j4qp82rdw01qwm6wnwtn4xc5kownboxy8pax41gixs2zykjef8ltdenh6vfdicxhzw9gy4r0 == \3\l\0\2\6\7\p\y\f\f\l\s\b\f\2\1\8\7\z\j\c\3\7\2\a\7\t\x\4\s\l\2\r\g\0\f\0\6\p\8\p\w\y\m\9\q\t\7\l\l\i\9\0\8\g\y\u\7\5\4\0\w\v\a\j\s\r\0\s\5\9\w\3\t\x\4\9\e\v\q\7\h\p\k\x\1\x\l\f\o\e\h\x\m\2\8\2\c\x\u\w\8\k\r\2\l\2\v\u\j\1\q\o\9\o\y\i\x\x\p\x\v\3\n\o\v\p\f\k\d\t\q\2\3\e\z\3\e\n\2\o\t\w\i\3\f\d\g\4\i\c\5\a\z\3\s\1\y\p\u\z\6\9\l\c\4\q\w\i\a\j\h\m\e\t\v\z\v\k\f\d\8\a\3\9\s\j\h\a\8\a\j\4\r\v\e\m\p\0\p\e\b\p\k\w\i\5\w\m\p\o\1\i\9\q\z\1\n\q\e\l\e\s\f\m\0\q\y\q\i\n\p\h\q\1\q\p\p\u\4\g\m\u\4\d\d\b\s\r\2\8\2\z\7\7\e\7\o\u\o\p\q\l\g\n\s\2\f\5\9\9\u\4\x\g\0\t\q\h\8\u\y\8\j\3\u\e\3\c\v\3\h\4\6\j\7\b\1\8\d\g\u\d\b\p\z\d\v\t\t\u\q\x\4\9\n\d\5\n\w\l\i\9\r\c\d\x\n\5\r\e\5\o\x\6\j\h\m\i\j\l\2\0\n\m\s\i\x\s\u\m\w\b\z\i\q\j\q\1\o\m\s\3\y\c\4\0\6\k\3\b\c\v\a\5\n\q\i\j\t\v\c\g\5\a\1\o\r\z\k\y\r\d\7\o\d\o\m\6\l\d\p\m\m\a\m\4\q\3\l\z\g\p\i\w\o\z\o\5\9\5\v\7\o\c\6\f\x\3\4\r\n\1\0\g\7\e\w\8\0\j\4\q\p\8\2\r\d\w\0\1\q\w\m\6\w\n\w\t\n\4\x\c\5\k\o\w\n\b\o\x\y\8\p\a\x\4\1\g\i\x\s\2\z\y\k\j\e\f\8\l\t\d\e\n\h\6\v\f\d\i\c\x\h\z\w\9\g\y\4\r\0 ]] 00:24:46.252 14:26:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:24:46.252 14:26:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:24:46.252 [2024-11-18 14:26:38.291375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:46.252 [2024-11-18 14:26:38.291638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144250 ] 00:24:46.510 [2024-11-18 14:26:38.437237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.510 [2024-11-18 14:26:38.502364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.769  [2024-11-18T14:26:39.101Z] Copying: 512/512 [B] (average 250 kBps) 00:24:47.027 00:24:47.027 14:26:38 -- dd/posix.sh@93 -- # [[ 3l0267pyfflsbf2187zjc372a7tx4sl2rg0f06p8pwym9qt7lli908gyu7540wvajsr0s59w3tx49evq7hpkx1xlfoehxm282cxuw8kr2l2vuj1qo9oyixxpxv3novpfkdtq23ez3en2otwi3fdg4ic5az3s1ypuz69lc4qwiajhmetvzvkfd8a39sjha8aj4rvemp0pebpkwi5wmpo1i9qz1nqelesfm0qyqinphq1qppu4gmu4ddbsr282z77e7ouopqlgns2f599u4xg0tqh8uy8j3ue3cv3h46j7b18dgudbpzdvttuqx49nd5nwli9rcdxn5re5ox6jhmijl20nmsixsumwbziqjq1oms3yc406k3bcva5nqijtvcg5a1orzkyrd7odom6ldpmmam4q3lzgpiwozo595v7oc6fx34rn10g7ew80j4qp82rdw01qwm6wnwtn4xc5kownboxy8pax41gixs2zykjef8ltdenh6vfdicxhzw9gy4r0 == \3\l\0\2\6\7\p\y\f\f\l\s\b\f\2\1\8\7\z\j\c\3\7\2\a\7\t\x\4\s\l\2\r\g\0\f\0\6\p\8\p\w\y\m\9\q\t\7\l\l\i\9\0\8\g\y\u\7\5\4\0\w\v\a\j\s\r\0\s\5\9\w\3\t\x\4\9\e\v\q\7\h\p\k\x\1\x\l\f\o\e\h\x\m\2\8\2\c\x\u\w\8\k\r\2\l\2\v\u\j\1\q\o\9\o\y\i\x\x\p\x\v\3\n\o\v\p\f\k\d\t\q\2\3\e\z\3\e\n\2\o\t\w\i\3\f\d\g\4\i\c\5\a\z\3\s\1\y\p\u\z\6\9\l\c\4\q\w\i\a\j\h\m\e\t\v\z\v\k\f\d\8\a\3\9\s\j\h\a\8\a\j\4\r\v\e\m\p\0\p\e\b\p\k\w\i\5\w\m\p\o\1\i\9\q\z\1\n\q\e\l\e\s\f\m\0\q\y\q\i\n\p\h\q\1\q\p\p\u\4\g\m\u\4\d\d\b\s\r\2\8\2\z\7\7\e\7\o\u\o\p\q\l\g\n\s\2\f\5\9\9\u\4\x\g\0\t\q\h\8\u\y\8\j\3\u\e\3\c\v\3\h\4\6\j\7\b\1\8\d\g\u\d\b\p\z\d\v\t\t\u\q\x\4\9\n\d\5\n\w\l\i\9\r\c\d\x\n\5\r\e\5\o\x\6\j\h\m\i\j\l\2\0\n\m\s\i\x\s\u\m\w\b\z\i\q\j\q\1\o\m\s\3\y\c\4\0\6\k\3\b\c\v\a\5\n\q\i\j\t\v\c\g\5\a\1\o\r\z\k\y\r\d\7\o\d\o\m\6\l\d\p\m\m\a\m\4\q\3\l\z\g\p\i\w\o\z\o\5\9\5\v\7\o\c\6\f\x\3\4\r\n\1\0\g\7\e\w\8\0\j\4\q\p\8\2\r\d\w\0\1\q\w\m\6\w\n\w\t\n\4\x\c\5\k\o\w\n\b\o\x\y\8\p\a\x\4\1\g\i\x\s\2\z\y\k\j\e\f\8\l\t\d\e\n\h\6\v\f\d\i\c\x\h\z\w\9\g\y\4\r\0 ]] 00:24:47.027 00:24:47.027 real 0m5.952s 00:24:47.027 user 0m3.025s 00:24:47.027 sys 0m1.827s 00:24:47.027 14:26:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:47.027 14:26:38 -- common/autotest_common.sh@10 -- # set +x 00:24:47.027 ************************************ 00:24:47.027 END TEST dd_flags_misc_forced_aio 00:24:47.027 ************************************ 00:24:47.027 14:26:39 -- dd/posix.sh@1 -- # cleanup 00:24:47.027 14:26:39 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:24:47.027 14:26:39 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:24:47.027 00:24:47.027 real 0m24.181s 00:24:47.027 user 0m11.016s 00:24:47.027 sys 0m6.960s 00:24:47.027 14:26:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:47.027 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:24:47.027 ************************************ 00:24:47.027 END TEST spdk_dd_posix 00:24:47.027 ************************************ 00:24:47.027 14:26:39 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:24:47.027 14:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:47.027 14:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 
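The dd_flags_misc pass wrapped up above is a 2x4 matrix: each input flag (direct, nonblock) is paired with every output flag (direct, nonblock, sync, dsync), and after each copy the test asserts that the 512 random bytes survived intact, which is what the long [[ ... == ... ]] comparisons are. Condensed from the xtrace:

flags_ro=(direct nonblock)               # O_DIRECT, O_NONBLOCK on the read side
flags_rw=("${flags_ro[@]}" sync dsync)   # write side adds O_SYNC, O_DSYNC
for flag_ro in "${flags_ro[@]}"; do
    gen_bytes 512                        # fresh random payload in dd.dump0
    for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                --of=dd.dump1 --oflag="$flag_rw"
        [[ $(< dd.dump0) == "$(< dd.dump1)" ]]   # must round-trip byte-for-byte
    done
done

The lower per-run averages on the sync (100 to 166 kBps) and dsync (250 kBps) legs versus 500 kBps elsewhere line up with the extra flush those flags force on every write.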
00:24:47.027 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:24:47.027 ************************************ 00:24:47.027 START TEST spdk_dd_malloc 00:24:47.027 ************************************ 00:24:47.027 14:26:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:24:47.286 * Looking for test storage... 00:24:47.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:47.286 14:26:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:47.286 14:26:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:47.286 14:26:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:47.286 14:26:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:47.286 14:26:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:47.286 14:26:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:47.286 14:26:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:47.286 14:26:39 -- scripts/common.sh@335 -- # IFS=.-: 00:24:47.286 14:26:39 -- scripts/common.sh@335 -- # read -ra ver1 00:24:47.286 14:26:39 -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.286 14:26:39 -- scripts/common.sh@336 -- # read -ra ver2 00:24:47.286 14:26:39 -- scripts/common.sh@337 -- # local 'op=<' 00:24:47.286 14:26:39 -- scripts/common.sh@339 -- # ver1_l=2 00:24:47.286 14:26:39 -- scripts/common.sh@340 -- # ver2_l=1 00:24:47.286 14:26:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:47.286 14:26:39 -- scripts/common.sh@343 -- # case "$op" in 00:24:47.286 14:26:39 -- scripts/common.sh@344 -- # : 1 00:24:47.286 14:26:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:47.286 14:26:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:47.286 14:26:39 -- scripts/common.sh@364 -- # decimal 1 00:24:47.286 14:26:39 -- scripts/common.sh@352 -- # local d=1 00:24:47.286 14:26:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.286 14:26:39 -- scripts/common.sh@354 -- # echo 1 00:24:47.286 14:26:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:47.286 14:26:39 -- scripts/common.sh@365 -- # decimal 2 00:24:47.286 14:26:39 -- scripts/common.sh@352 -- # local d=2 00:24:47.286 14:26:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.286 14:26:39 -- scripts/common.sh@354 -- # echo 2 00:24:47.286 14:26:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:47.286 14:26:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:47.286 14:26:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:47.286 14:26:39 -- scripts/common.sh@367 -- # return 0 00:24:47.286 14:26:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.286 14:26:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.286 --rc genhtml_branch_coverage=1 00:24:47.286 --rc genhtml_function_coverage=1 00:24:47.286 --rc genhtml_legend=1 00:24:47.286 --rc geninfo_all_blocks=1 00:24:47.286 --rc geninfo_unexecuted_blocks=1 00:24:47.286 00:24:47.286 ' 00:24:47.286 14:26:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.286 --rc genhtml_branch_coverage=1 00:24:47.286 --rc genhtml_function_coverage=1 00:24:47.286 --rc genhtml_legend=1 00:24:47.286 --rc geninfo_all_blocks=1 00:24:47.286 --rc geninfo_unexecuted_blocks=1 00:24:47.286 00:24:47.286 ' 00:24:47.286 14:26:39 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:24:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.286 --rc genhtml_branch_coverage=1 00:24:47.286 --rc genhtml_function_coverage=1 00:24:47.286 --rc genhtml_legend=1 00:24:47.286 --rc geninfo_all_blocks=1 00:24:47.286 --rc geninfo_unexecuted_blocks=1 00:24:47.286 00:24:47.286 ' 00:24:47.286 14:26:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.286 --rc genhtml_branch_coverage=1 00:24:47.286 --rc genhtml_function_coverage=1 00:24:47.286 --rc genhtml_legend=1 00:24:47.286 --rc geninfo_all_blocks=1 00:24:47.286 --rc geninfo_unexecuted_blocks=1 00:24:47.286 00:24:47.286 ' 00:24:47.286 14:26:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:47.286 14:26:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.286 14:26:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.286 14:26:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.286 14:26:39 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:47.286 14:26:39 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:47.286 14:26:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:47.286 14:26:39 -- paths/export.sh@5 -- # export PATH 00:24:47.286 14:26:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:47.286 14:26:39 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:24:47.286 14:26:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:47.286 
14:26:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:47.286 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:24:47.286 ************************************ 00:24:47.286 START TEST dd_malloc_copy 00:24:47.286 ************************************ 00:24:47.286 14:26:39 -- common/autotest_common.sh@1114 -- # malloc_copy 00:24:47.286 14:26:39 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:24:47.286 14:26:39 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:24:47.286 14:26:39 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:24:47.286 14:26:39 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:24:47.286 14:26:39 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:24:47.286 14:26:39 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:24:47.286 14:26:39 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:24:47.286 14:26:39 -- dd/malloc.sh@28 -- # gen_conf 00:24:47.286 14:26:39 -- dd/common.sh@31 -- # xtrace_disable 00:24:47.286 14:26:39 -- common/autotest_common.sh@10 -- # set +x 00:24:47.286 [2024-11-18 14:26:39.309208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:47.286 [2024-11-18 14:26:39.309435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144336 ] 00:24:47.286 { 00:24:47.286 "subsystems": [ 00:24:47.286 { 00:24:47.286 "subsystem": "bdev", 00:24:47.286 "config": [ 00:24:47.286 { 00:24:47.286 "params": { 00:24:47.286 "block_size": 512, 00:24:47.286 "num_blocks": 1048576, 00:24:47.286 "name": "malloc0" 00:24:47.286 }, 00:24:47.286 "method": "bdev_malloc_create" 00:24:47.286 }, 00:24:47.286 { 00:24:47.286 "params": { 00:24:47.286 "block_size": 512, 00:24:47.286 "num_blocks": 1048576, 00:24:47.286 "name": "malloc1" 00:24:47.286 }, 00:24:47.286 "method": "bdev_malloc_create" 00:24:47.286 }, 00:24:47.286 { 00:24:47.286 "method": "bdev_wait_for_examine" 00:24:47.286 } 00:24:47.286 ] 00:24:47.286 } 00:24:47.286 ] 00:24:47.286 } 00:24:47.545 [2024-11-18 14:26:39.453842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.545 [2024-11-18 14:26:39.523685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.446  [2024-11-18T14:26:42.087Z] Copying: 209/512 [MB] (209 MBps) [2024-11-18T14:26:42.654Z] Copying: 419/512 [MB] (209 MBps) [2024-11-18T14:26:43.591Z] Copying: 512/512 [MB] (average 209 MBps) 00:24:51.517 00:24:51.517 14:26:43 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:24:51.517 14:26:43 -- dd/malloc.sh@33 -- # gen_conf 00:24:51.517 14:26:43 -- dd/common.sh@31 -- # xtrace_disable 00:24:51.517 14:26:43 -- common/autotest_common.sh@10 -- # set +x 00:24:51.517 [2024-11-18 14:26:43.390229] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:51.517 [2024-11-18 14:26:43.390491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144398 ] 00:24:51.517 { 00:24:51.517 "subsystems": [ 00:24:51.517 { 00:24:51.517 "subsystem": "bdev", 00:24:51.517 "config": [ 00:24:51.517 { 00:24:51.517 "params": { 00:24:51.517 "block_size": 512, 00:24:51.517 "num_blocks": 1048576, 00:24:51.517 "name": "malloc0" 00:24:51.517 }, 00:24:51.517 "method": "bdev_malloc_create" 00:24:51.517 }, 00:24:51.517 { 00:24:51.517 "params": { 00:24:51.517 "block_size": 512, 00:24:51.517 "num_blocks": 1048576, 00:24:51.517 "name": "malloc1" 00:24:51.517 }, 00:24:51.517 "method": "bdev_malloc_create" 00:24:51.517 }, 00:24:51.517 { 00:24:51.517 "method": "bdev_wait_for_examine" 00:24:51.517 } 00:24:51.517 ] 00:24:51.517 } 00:24:51.517 ] 00:24:51.517 } 00:24:51.517 [2024-11-18 14:26:43.537796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.776 [2024-11-18 14:26:43.616117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.171  [2024-11-18T14:26:46.204Z] Copying: 209/512 [MB] (209 MBps) [2024-11-18T14:26:46.782Z] Copying: 419/512 [MB] (209 MBps) [2024-11-18T14:26:47.718Z] Copying: 512/512 [MB] (average 209 MBps) 00:24:55.644 00:24:55.644 00:24:55.644 real 0m8.172s 00:24:55.644 user 0m6.773s 00:24:55.644 sys 0m1.255s 00:24:55.644 14:26:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:55.644 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:24:55.644 ************************************ 00:24:55.644 END TEST dd_malloc_copy 00:24:55.644 ************************************ 00:24:55.644 ************************************ 00:24:55.644 END TEST spdk_dd_malloc 00:24:55.644 ************************************ 00:24:55.644 00:24:55.644 real 0m8.395s 00:24:55.644 user 0m6.900s 00:24:55.644 sys 0m1.359s 00:24:55.644 14:26:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:55.644 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:24:55.644 14:26:47 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:24:55.644 14:26:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:55.644 14:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.644 14:26:47 -- common/autotest_common.sh@10 -- # set +x 00:24:55.644 ************************************ 00:24:55.644 START TEST spdk_dd_bdev_to_bdev 00:24:55.644 ************************************ 00:24:55.644 14:26:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:24:55.644 * Looking for test storage... 
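dd_malloc_copy, which closes out just above, touches no files at all: both endpoints are RAM-backed malloc bdevs built from the JSON printed in the trace (two bdevs of 1048576 blocks x 512 bytes, i.e. 512 MiB each), copied forward and then back at an average of 209 MBps. A sketch of the invocation shape, with the config inlined for readability (the script itself generates it with gen_conf and hands it over on /dev/fd/62):

gen_conf() {
    cat <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_malloc_create",
      "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
    { "method": "bdev_malloc_create",
      "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
    { "method": "bdev_wait_for_examine" } ] } ] }
JSON
}
spdk_dd --ib=malloc0 --ob=malloc1 --json <(gen_conf)   # 512 MiB RAM-to-RAM copy
spdk_dd --ib=malloc1 --ob=malloc0 --json <(gen_conf)   # and back the other way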
00:24:55.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:55.644 14:26:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:55.644 14:26:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:55.644 14:26:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:55.644 14:26:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:55.644 14:26:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:55.644 14:26:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:55.644 14:26:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:55.644 14:26:47 -- scripts/common.sh@335 -- # IFS=.-: 00:24:55.644 14:26:47 -- scripts/common.sh@335 -- # read -ra ver1 00:24:55.644 14:26:47 -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.644 14:26:47 -- scripts/common.sh@336 -- # read -ra ver2 00:24:55.644 14:26:47 -- scripts/common.sh@337 -- # local 'op=<' 00:24:55.645 14:26:47 -- scripts/common.sh@339 -- # ver1_l=2 00:24:55.645 14:26:47 -- scripts/common.sh@340 -- # ver2_l=1 00:24:55.645 14:26:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:55.645 14:26:47 -- scripts/common.sh@343 -- # case "$op" in 00:24:55.645 14:26:47 -- scripts/common.sh@344 -- # : 1 00:24:55.645 14:26:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:55.645 14:26:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:55.645 14:26:47 -- scripts/common.sh@364 -- # decimal 1 00:24:55.645 14:26:47 -- scripts/common.sh@352 -- # local d=1 00:24:55.645 14:26:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.645 14:26:47 -- scripts/common.sh@354 -- # echo 1 00:24:55.645 14:26:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:55.645 14:26:47 -- scripts/common.sh@365 -- # decimal 2 00:24:55.645 14:26:47 -- scripts/common.sh@352 -- # local d=2 00:24:55.645 14:26:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.645 14:26:47 -- scripts/common.sh@354 -- # echo 2 00:24:55.645 14:26:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:55.645 14:26:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:55.645 14:26:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:55.645 14:26:47 -- scripts/common.sh@367 -- # return 0 00:24:55.645 14:26:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.645 14:26:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:55.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.645 --rc genhtml_branch_coverage=1 00:24:55.645 --rc genhtml_function_coverage=1 00:24:55.645 --rc genhtml_legend=1 00:24:55.645 --rc geninfo_all_blocks=1 00:24:55.645 --rc geninfo_unexecuted_blocks=1 00:24:55.645 00:24:55.645 ' 00:24:55.645 14:26:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:55.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.645 --rc genhtml_branch_coverage=1 00:24:55.645 --rc genhtml_function_coverage=1 00:24:55.645 --rc genhtml_legend=1 00:24:55.645 --rc geninfo_all_blocks=1 00:24:55.645 --rc geninfo_unexecuted_blocks=1 00:24:55.645 00:24:55.645 ' 00:24:55.645 14:26:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:55.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.645 --rc genhtml_branch_coverage=1 00:24:55.645 --rc genhtml_function_coverage=1 00:24:55.645 --rc genhtml_legend=1 00:24:55.645 --rc geninfo_all_blocks=1 00:24:55.645 --rc geninfo_unexecuted_blocks=1 00:24:55.645 00:24:55.645 ' 00:24:55.645 14:26:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:55.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.645 --rc genhtml_branch_coverage=1 00:24:55.645 --rc genhtml_function_coverage=1 00:24:55.645 --rc genhtml_legend=1 00:24:55.645 --rc geninfo_all_blocks=1 00:24:55.645 --rc geninfo_unexecuted_blocks=1 00:24:55.645 00:24:55.645 ' 00:24:55.645 14:26:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.645 14:26:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.645 14:26:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.645 14:26:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.645 14:26:47 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.645 14:26:47 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.645 14:26:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.645 14:26:47 -- paths/export.sh@5 -- # export PATH 00:24:55.645 14:26:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:24:55.645 14:26:47 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:24:55.904 [2024-11-18 14:26:47.757430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:55.904 [2024-11-18 14:26:47.757644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144518 ] 00:24:55.904 [2024-11-18 14:26:47.900682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.163 [2024-11-18 14:26:47.978405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.421  [2024-11-18T14:26:48.755Z] Copying: 256/256 [MB] (average 1098 MBps) 00:24:56.681 00:24:56.681 14:26:48 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:56.681 14:26:48 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:56.681 14:26:48 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:24:56.681 14:26:48 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:24:56.681 14:26:48 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:24:56.681 14:26:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:24:56.681 14:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.681 14:26:48 -- common/autotest_common.sh@10 -- # set +x 00:24:56.681 ************************************ 00:24:56.681 START TEST dd_inflate_file 00:24:56.681 ************************************ 00:24:56.681 14:26:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:24:56.681 [2024-11-18 14:26:48.745457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:56.681 [2024-11-18 14:26:48.745709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144541 ] 00:24:56.940 [2024-11-18 14:26:48.890085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.940 [2024-11-18 14:26:48.968298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.198  [2024-11-18T14:26:49.530Z] Copying: 64/64 [MB] (average 1084 MBps) 00:24:57.456 00:24:57.456 00:24:57.456 real 0m0.788s 00:24:57.456 user 0m0.348s 00:24:57.456 sys 0m0.296s 00:24:57.456 14:26:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:57.456 ************************************ 00:24:57.456 END TEST dd_inflate_file 00:24:57.456 ************************************ 00:24:57.456 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:57.456 14:26:49 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:24:57.715 14:26:49 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:24:57.715 14:26:49 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:24:57.715 14:26:49 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:24:57.715 14:26:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:24:57.715 14:26:49 -- dd/common.sh@31 -- # xtrace_disable 00:24:57.715 14:26:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.715 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:57.715 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:57.715 ************************************ 00:24:57.715 START TEST dd_copy_to_out_bdev 00:24:57.715 ************************************ 00:24:57.715 14:26:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:24:57.715 [2024-11-18 14:26:49.588767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
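The test_file0_size=67108891 figure above decodes neatly: dd_inflate_file appended 64 x 1048576 zero bytes to a file that already held the 27-byte magic line (26 characters plus a newline), so 67108864 + 27 = 67108891:

echo 'This Is Our Magic, find it' > dd.dump0   # 27 bytes, magic at offset 0
spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
wc -c < dd.dump0                               # 64*1048576 + 27 = 67108891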
00:24:57.715 [2024-11-18 14:26:49.588964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144588 ] 00:24:57.715 { 00:24:57.715 "subsystems": [ 00:24:57.715 { 00:24:57.715 "subsystem": "bdev", 00:24:57.715 "config": [ 00:24:57.715 { 00:24:57.715 "params": { 00:24:57.715 "block_size": 4096, 00:24:57.715 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:24:57.715 "name": "aio1" 00:24:57.715 }, 00:24:57.715 "method": "bdev_aio_create" 00:24:57.715 }, 00:24:57.715 { 00:24:57.715 "params": { 00:24:57.715 "trtype": "pcie", 00:24:57.715 "traddr": "0000:00:06.0", 00:24:57.715 "name": "Nvme0" 00:24:57.715 }, 00:24:57.715 "method": "bdev_nvme_attach_controller" 00:24:57.715 }, 00:24:57.715 { 00:24:57.715 "method": "bdev_wait_for_examine" 00:24:57.715 } 00:24:57.715 ] 00:24:57.715 } 00:24:57.715 ] 00:24:57.715 } 00:24:57.715 [2024-11-18 14:26:49.726172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.974 [2024-11-18 14:26:49.802069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.353  [2024-11-18T14:26:51.686Z] Copying: 37/64 [MB] (37 MBps) [2024-11-18T14:26:51.945Z] Copying: 64/64 [MB] (average 40 MBps) 00:24:59.871 00:25:00.130 00:25:00.130 real 0m2.408s 00:25:00.130 user 0m2.004s 00:25:00.130 sys 0m0.279s 00:25:00.130 14:26:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:00.130 ************************************ 00:25:00.130 END TEST dd_copy_to_out_bdev 00:25:00.130 ************************************ 00:25:00.130 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.130 14:26:51 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:25:00.130 14:26:51 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:25:00.130 14:26:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:00.130 14:26:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:00.130 14:26:51 -- common/autotest_common.sh@10 -- # set +x 00:25:00.130 ************************************ 00:25:00.130 START TEST dd_offset_magic 00:25:00.130 ************************************ 00:25:00.130 14:26:52 -- common/autotest_common.sh@1114 -- # offset_magic 00:25:00.130 14:26:52 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:25:00.130 14:26:52 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:25:00.130 14:26:52 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:25:00.130 14:26:52 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:25:00.130 14:26:52 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:25:00.130 14:26:52 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:25:00.130 14:26:52 -- dd/common.sh@31 -- # xtrace_disable 00:25:00.130 14:26:52 -- common/autotest_common.sh@10 -- # set +x 00:25:00.130 [2024-11-18 14:26:52.062110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
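The starred START TEST/END TEST banners, the '[' 2 -le 1 ']' argument guard and the real/user/sys trios recurring through this log all come from the run_test wrapper in autotest_common.sh. A rough reimplementation of just the behavior visible here (not the actual SPDK source, which also saves and restores xtrace state):

run_test() {
    [ "$#" -le 1 ] && return 1    # the "'[' 2 -le 1 ']'" guard: need a name plus a command
    local test_name=$1; shift
    printf '%s\n' '************************************' "START TEST $test_name"
    time "$@"                     # produces the real/user/sys lines
    local rc=$?
    printf '%s\n' "END TEST $test_name" '************************************'
    return $rc
}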
00:25:00.130 { 00:25:00.130 "subsystems": [ 00:25:00.130 { 00:25:00.130 "subsystem": "bdev", 00:25:00.130 "config": [ 00:25:00.130 { 00:25:00.130 "params": { 00:25:00.130 "block_size": 4096, 00:25:00.130 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:00.130 "name": "aio1" 00:25:00.130 }, 00:25:00.130 "method": "bdev_aio_create" 00:25:00.130 }, 00:25:00.130 { 00:25:00.130 "params": { 00:25:00.130 "trtype": "pcie", 00:25:00.130 "traddr": "0000:00:06.0", 00:25:00.130 "name": "Nvme0" 00:25:00.130 }, 00:25:00.130 "method": "bdev_nvme_attach_controller" 00:25:00.130 }, 00:25:00.130 { 00:25:00.130 "method": "bdev_wait_for_examine" 00:25:00.130 } 00:25:00.130 ] 00:25:00.130 } 00:25:00.130 ] 00:25:00.130 } 00:25:00.130 [2024-11-18 14:26:52.063053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144646 ] 00:25:00.390 [2024-11-18 14:26:52.209257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.390 [2024-11-18 14:26:52.270710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.958  [2024-11-18T14:26:53.291Z] Copying: 65/65 [MB] (average 129 MBps) 00:25:01.217 00:25:01.217 14:26:53 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:25:01.217 14:26:53 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:25:01.217 14:26:53 -- dd/common.sh@31 -- # xtrace_disable 00:25:01.217 14:26:53 -- common/autotest_common.sh@10 -- # set +x 00:25:01.476 [2024-11-18 14:26:53.342143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:01.476 [2024-11-18 14:26:53.342649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144672 ] 00:25:01.476 { 00:25:01.476 "subsystems": [ 00:25:01.476 { 00:25:01.476 "subsystem": "bdev", 00:25:01.476 "config": [ 00:25:01.476 { 00:25:01.476 "params": { 00:25:01.476 "block_size": 4096, 00:25:01.476 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:01.476 "name": "aio1" 00:25:01.476 }, 00:25:01.476 "method": "bdev_aio_create" 00:25:01.476 }, 00:25:01.476 { 00:25:01.476 "params": { 00:25:01.476 "trtype": "pcie", 00:25:01.476 "traddr": "0000:00:06.0", 00:25:01.476 "name": "Nvme0" 00:25:01.476 }, 00:25:01.476 "method": "bdev_nvme_attach_controller" 00:25:01.476 }, 00:25:01.476 { 00:25:01.476 "method": "bdev_wait_for_examine" 00:25:01.476 } 00:25:01.476 ] 00:25:01.476 } 00:25:01.476 ] 00:25:01.476 } 00:25:01.476 [2024-11-18 14:26:53.492908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.735 [2024-11-18 14:26:53.573141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.735  [2024-11-18T14:26:54.377Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:25:02.303 00:25:02.303 14:26:54 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:25:02.303 14:26:54 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:25:02.303 14:26:54 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:25:02.303 14:26:54 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:25:02.303 14:26:54 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:25:02.303 14:26:54 -- dd/common.sh@31 -- # xtrace_disable 00:25:02.303 14:26:54 -- common/autotest_common.sh@10 -- # set +x 00:25:02.303 [2024-11-18 14:26:54.198146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:02.303 [2024-11-18 14:26:54.198575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144689 ] 00:25:02.303 { 00:25:02.303 "subsystems": [ 00:25:02.303 { 00:25:02.303 "subsystem": "bdev", 00:25:02.303 "config": [ 00:25:02.303 { 00:25:02.303 "params": { 00:25:02.303 "block_size": 4096, 00:25:02.303 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:02.303 "name": "aio1" 00:25:02.303 }, 00:25:02.303 "method": "bdev_aio_create" 00:25:02.303 }, 00:25:02.303 { 00:25:02.303 "params": { 00:25:02.303 "trtype": "pcie", 00:25:02.303 "traddr": "0000:00:06.0", 00:25:02.303 "name": "Nvme0" 00:25:02.303 }, 00:25:02.303 "method": "bdev_nvme_attach_controller" 00:25:02.303 }, 00:25:02.303 { 00:25:02.303 "method": "bdev_wait_for_examine" 00:25:02.303 } 00:25:02.303 ] 00:25:02.303 } 00:25:02.303 ] 00:25:02.303 } 00:25:02.303 [2024-11-18 14:26:54.344798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.562 [2024-11-18 14:26:54.403782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.130  [2024-11-18T14:26:55.464Z] Copying: 65/65 [MB] (average 174 MBps) 00:25:03.390 00:25:03.390 14:26:55 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:25:03.390 14:26:55 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:25:03.390 14:26:55 -- dd/common.sh@31 -- # xtrace_disable 00:25:03.390 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:25:03.390 [2024-11-18 14:26:55.351285] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:03.390 [2024-11-18 14:26:55.351744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144711 ] 00:25:03.390 { 00:25:03.390 "subsystems": [ 00:25:03.390 { 00:25:03.390 "subsystem": "bdev", 00:25:03.390 "config": [ 00:25:03.390 { 00:25:03.390 "params": { 00:25:03.390 "block_size": 4096, 00:25:03.390 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:03.390 "name": "aio1" 00:25:03.390 }, 00:25:03.390 "method": "bdev_aio_create" 00:25:03.390 }, 00:25:03.390 { 00:25:03.390 "params": { 00:25:03.390 "trtype": "pcie", 00:25:03.390 "traddr": "0000:00:06.0", 00:25:03.390 "name": "Nvme0" 00:25:03.390 }, 00:25:03.390 "method": "bdev_nvme_attach_controller" 00:25:03.390 }, 00:25:03.390 { 00:25:03.390 "method": "bdev_wait_for_examine" 00:25:03.390 } 00:25:03.390 ] 00:25:03.390 } 00:25:03.390 ] 00:25:03.390 } 00:25:03.648 [2024-11-18 14:26:55.500825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.648 [2024-11-18 14:26:55.580435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.907  [2024-11-18T14:26:56.240Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:25:04.166 00:25:04.166 ************************************ 00:25:04.166 END TEST dd_offset_magic 00:25:04.166 ************************************ 00:25:04.166 14:26:56 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:25:04.166 14:26:56 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:25:04.166 00:25:04.166 real 0m4.151s 00:25:04.166 user 0m1.957s 00:25:04.166 sys 0m1.010s 00:25:04.166 14:26:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:04.166 14:26:56 -- common/autotest_common.sh@10 -- # set +x 00:25:04.166 14:26:56 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:25:04.166 14:26:56 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:25:04.166 14:26:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:04.166 14:26:56 -- dd/common.sh@11 -- # local nvme_ref= 00:25:04.166 14:26:56 -- dd/common.sh@12 -- # local size=4194330 00:25:04.166 14:26:56 -- dd/common.sh@14 -- # local bs=1048576 00:25:04.166 14:26:56 -- dd/common.sh@15 -- # local count=5 00:25:04.166 14:26:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:25:04.166 14:26:56 -- dd/common.sh@18 -- # gen_conf 00:25:04.166 14:26:56 -- dd/common.sh@31 -- # xtrace_disable 00:25:04.166 14:26:56 -- common/autotest_common.sh@10 -- # set +x 00:25:04.425 [2024-11-18 14:26:56.257417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
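The cleanup above zeroes size=4194330 bytes of Nvme0n1 (and, just below, the same span of aio1). The odd size appears to be 4 MiB plus 26 bytes, i.e. the length of the magic string on top of a 4 MiB region, and with bs=1048576 it takes count=5 units to cover it: 4 * 1048576 = 4194304 < 4194330, so the copy rounds up to five 1 MiB writes -- hence the 'Copying: 5120/5120 [kB]' progress lines.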
00:25:04.425 [2024-11-18 14:26:56.257823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144748 ] 00:25:04.425 { 00:25:04.425 "subsystems": [ 00:25:04.425 { 00:25:04.425 "subsystem": "bdev", 00:25:04.425 "config": [ 00:25:04.425 { 00:25:04.425 "params": { 00:25:04.425 "block_size": 4096, 00:25:04.425 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:04.425 "name": "aio1" 00:25:04.425 }, 00:25:04.425 "method": "bdev_aio_create" 00:25:04.425 }, 00:25:04.425 { 00:25:04.425 "params": { 00:25:04.425 "trtype": "pcie", 00:25:04.425 "traddr": "0000:00:06.0", 00:25:04.425 "name": "Nvme0" 00:25:04.425 }, 00:25:04.425 "method": "bdev_nvme_attach_controller" 00:25:04.425 }, 00:25:04.425 { 00:25:04.425 "method": "bdev_wait_for_examine" 00:25:04.425 } 00:25:04.425 ] 00:25:04.425 } 00:25:04.425 ] 00:25:04.425 } 00:25:04.425 [2024-11-18 14:26:56.405626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.425 [2024-11-18 14:26:56.476266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.684  [2024-11-18T14:26:57.017Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:25:04.943 00:25:04.943 14:26:56 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:25:04.943 14:26:56 -- dd/common.sh@10 -- # local bdev=aio1 00:25:04.943 14:26:56 -- dd/common.sh@11 -- # local nvme_ref= 00:25:04.943 14:26:56 -- dd/common.sh@12 -- # local size=4194330 00:25:04.943 14:26:56 -- dd/common.sh@14 -- # local bs=1048576 00:25:04.943 14:26:56 -- dd/common.sh@15 -- # local count=5 00:25:04.943 14:26:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:25:04.943 14:26:56 -- dd/common.sh@18 -- # gen_conf 00:25:04.943 14:26:56 -- dd/common.sh@31 -- # xtrace_disable 00:25:04.943 14:26:56 -- common/autotest_common.sh@10 -- # set +x 00:25:04.943 [2024-11-18 14:26:56.984335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:04.943 { 00:25:04.943 "subsystems": [ 00:25:04.943 { 00:25:04.943 "subsystem": "bdev", 00:25:04.943 "config": [ 00:25:04.943 { 00:25:04.943 "params": { 00:25:04.943 "block_size": 4096, 00:25:04.943 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:25:04.943 "name": "aio1" 00:25:04.943 }, 00:25:04.943 "method": "bdev_aio_create" 00:25:04.943 }, 00:25:04.943 { 00:25:04.943 "params": { 00:25:04.943 "trtype": "pcie", 00:25:04.943 "traddr": "0000:00:06.0", 00:25:04.943 "name": "Nvme0" 00:25:04.943 }, 00:25:04.943 "method": "bdev_nvme_attach_controller" 00:25:04.943 }, 00:25:04.943 { 00:25:04.943 "method": "bdev_wait_for_examine" 00:25:04.943 } 00:25:04.943 ] 00:25:04.943 } 00:25:04.943 ] 00:25:04.943 } 00:25:04.943 [2024-11-18 14:26:56.984591] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144770 ] 00:25:05.202 [2024-11-18 14:26:57.129057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.202 [2024-11-18 14:26:57.193562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.462  [2024-11-18T14:26:57.795Z] Copying: 5120/5120 [kB] (average 227 MBps) 00:25:05.721 00:25:05.721 14:26:57 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:25:05.721 00:25:05.721 real 0m10.243s 00:25:05.721 user 0m5.739s 00:25:05.721 sys 0m2.658s 00:25:05.721 ************************************ 00:25:05.721 END TEST spdk_dd_bdev_to_bdev 00:25:05.721 ************************************ 00:25:05.721 14:26:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:05.721 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:25:05.980 14:26:57 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:25:05.980 14:26:57 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:25:05.980 14:26:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:05.980 14:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:05.980 14:26:57 -- common/autotest_common.sh@10 -- # set +x 00:25:05.981 ************************************ 00:25:05.981 START TEST spdk_dd_sparse 00:25:05.981 ************************************ 00:25:05.981 14:26:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:25:05.981 * Looking for test storage... 
00:25:05.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:05.981 14:26:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:05.981 14:26:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:05.981 14:26:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:05.981 14:26:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:05.981 14:26:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:05.981 14:26:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:05.981 14:26:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:05.981 14:26:57 -- scripts/common.sh@335 -- # IFS=.-: 00:25:05.981 14:26:57 -- scripts/common.sh@335 -- # read -ra ver1 00:25:05.981 14:26:57 -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.981 14:26:57 -- scripts/common.sh@336 -- # read -ra ver2 00:25:05.981 14:26:57 -- scripts/common.sh@337 -- # local 'op=<' 00:25:05.981 14:26:57 -- scripts/common.sh@339 -- # ver1_l=2 00:25:05.981 14:26:57 -- scripts/common.sh@340 -- # ver2_l=1 00:25:05.981 14:26:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:05.981 14:26:57 -- scripts/common.sh@343 -- # case "$op" in 00:25:05.981 14:26:57 -- scripts/common.sh@344 -- # : 1 00:25:05.981 14:26:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:05.981 14:26:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:05.981 14:26:57 -- scripts/common.sh@364 -- # decimal 1 00:25:05.981 14:26:57 -- scripts/common.sh@352 -- # local d=1 00:25:05.981 14:26:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.981 14:26:57 -- scripts/common.sh@354 -- # echo 1 00:25:05.981 14:26:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:05.981 14:26:57 -- scripts/common.sh@365 -- # decimal 2 00:25:05.981 14:26:57 -- scripts/common.sh@352 -- # local d=2 00:25:05.981 14:26:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.981 14:26:57 -- scripts/common.sh@354 -- # echo 2 00:25:05.981 14:26:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:05.981 14:26:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:05.981 14:26:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:05.981 14:26:57 -- scripts/common.sh@367 -- # return 0 00:25:05.981 14:26:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.981 14:26:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:05.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.981 --rc genhtml_branch_coverage=1 00:25:05.981 --rc genhtml_function_coverage=1 00:25:05.981 --rc genhtml_legend=1 00:25:05.981 --rc geninfo_all_blocks=1 00:25:05.981 --rc geninfo_unexecuted_blocks=1 00:25:05.981 00:25:05.981 ' 00:25:05.981 14:26:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:05.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.981 --rc genhtml_branch_coverage=1 00:25:05.981 --rc genhtml_function_coverage=1 00:25:05.981 --rc genhtml_legend=1 00:25:05.981 --rc geninfo_all_blocks=1 00:25:05.981 --rc geninfo_unexecuted_blocks=1 00:25:05.981 00:25:05.981 ' 00:25:05.981 14:26:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:05.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.981 --rc genhtml_branch_coverage=1 00:25:05.981 --rc genhtml_function_coverage=1 00:25:05.981 --rc genhtml_legend=1 00:25:05.981 --rc geninfo_all_blocks=1 00:25:05.981 --rc geninfo_unexecuted_blocks=1 00:25:05.981 00:25:05.981 ' 00:25:05.981 14:26:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:05.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.981 --rc genhtml_branch_coverage=1 00:25:05.981 --rc genhtml_function_coverage=1 00:25:05.981 --rc genhtml_legend=1 00:25:05.981 --rc geninfo_all_blocks=1 00:25:05.981 --rc geninfo_unexecuted_blocks=1 00:25:05.981 00:25:05.981 ' 00:25:05.981 14:26:57 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:05.981 14:26:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.981 14:26:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.981 14:26:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.981 14:26:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.981 14:26:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.981 14:26:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.981 14:26:57 -- paths/export.sh@5 -- # export PATH 00:25:05.981 14:26:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:05.981 14:26:57 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:25:05.981 14:26:57 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:25:05.981 14:26:57 -- dd/sparse.sh@110 -- # file1=file_zero1 00:25:05.981 14:26:57 -- dd/sparse.sh@111 -- # file2=file_zero2 00:25:05.981 14:26:57 -- dd/sparse.sh@112 -- # file3=file_zero3 00:25:05.981 14:26:57 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:25:05.981 14:26:57 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:25:05.981 14:26:57 -- dd/sparse.sh@116 -- # trap cleanup EXIT 
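The prepare step just below lays out file_zero1 as a sparse file: after truncating the 100 MB aio backing disk, three dd writes of bs=4M count=1 land at seek 0, 4 and 8 (in 4 MiB units), i.e. at byte ranges 0-4 MiB, 16-20 MiB and 32-36 MiB, with holes in between. That geometry explains the stat checks further down: the apparent size is 37748736 bytes (36 * 1024 * 1024) while only 24576 512-byte blocks are allocated (24576 * 512 = 12582912 bytes, exactly the three written extents). The copies then use --bs=12582912 (the same 12 MiB) together with --sparse, spdk_dd's hole-skipping mode, which is how the destination can end up as sparse as the source -- the matching %b block counts verify it. A quick manual check of the same layout (a sketch, independent of the test harness):

    # Recreate file_zero1's layout and compare apparent vs. allocated size.
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
    stat --printf='apparent=%s allocated_blocks=%b\n' file_zero1
    # expect apparent=37748736; blocks is 24576 (512-byte units) on
    # filesystems that do not preallocate extra extents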
00:25:05.981 14:26:57 -- dd/sparse.sh@118 -- # prepare 00:25:05.981 14:26:57 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:25:05.981 14:26:57 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:25:05.981 1+0 records in 00:25:05.981 1+0 records out 00:25:05.981 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00909247 s, 461 MB/s 00:25:05.981 14:26:57 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:25:05.981 1+0 records in 00:25:05.981 1+0 records out 00:25:05.981 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00987236 s, 425 MB/s 00:25:05.981 14:26:58 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:25:05.981 1+0 records in 00:25:05.981 1+0 records out 00:25:05.981 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00736318 s, 570 MB/s 00:25:05.981 14:26:58 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:25:05.981 14:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:05.981 14:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:05.981 14:26:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.981 ************************************ 00:25:05.981 START TEST dd_sparse_file_to_file 00:25:05.981 ************************************ 00:25:05.981 14:26:58 -- common/autotest_common.sh@1114 -- # file_to_file 00:25:05.981 14:26:58 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:25:05.981 14:26:58 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:25:05.981 14:26:58 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:25:05.981 14:26:58 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:25:05.981 14:26:58 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:25:05.981 14:26:58 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:25:05.981 14:26:58 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:25:05.981 14:26:58 -- dd/sparse.sh@41 -- # gen_conf 00:25:05.981 14:26:58 -- dd/common.sh@31 -- # xtrace_disable 00:25:05.981 14:26:58 -- common/autotest_common.sh@10 -- # set +x 00:25:06.241 [2024-11-18 14:26:58.079784] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:06.241 [2024-11-18 14:26:58.080195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144859 ] 00:25:06.241 { 00:25:06.241 "subsystems": [ 00:25:06.241 { 00:25:06.241 "subsystem": "bdev", 00:25:06.241 "config": [ 00:25:06.241 { 00:25:06.241 "params": { 00:25:06.241 "block_size": 4096, 00:25:06.241 "filename": "dd_sparse_aio_disk", 00:25:06.241 "name": "dd_aio" 00:25:06.241 }, 00:25:06.241 "method": "bdev_aio_create" 00:25:06.241 }, 00:25:06.241 { 00:25:06.241 "params": { 00:25:06.241 "lvs_name": "dd_lvstore", 00:25:06.241 "bdev_name": "dd_aio" 00:25:06.241 }, 00:25:06.241 "method": "bdev_lvol_create_lvstore" 00:25:06.241 }, 00:25:06.241 { 00:25:06.241 "method": "bdev_wait_for_examine" 00:25:06.241 } 00:25:06.241 ] 00:25:06.241 } 00:25:06.241 ] 00:25:06.241 } 00:25:06.241 [2024-11-18 14:26:58.224426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.241 [2024-11-18 14:26:58.280724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.500  [2024-11-18T14:26:58.833Z] Copying: 12/36 [MB] (average 1090 MBps) 00:25:06.759 00:25:06.759 14:26:58 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:25:06.759 14:26:58 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:25:06.759 14:26:58 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:25:06.759 14:26:58 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:25:06.759 14:26:58 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:25:06.759 14:26:58 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:25:06.759 14:26:58 -- dd/sparse.sh@52 -- # stat1_b=24576 00:25:06.759 14:26:58 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:25:07.019 ************************************ 00:25:07.019 END TEST dd_sparse_file_to_file 00:25:07.019 ************************************ 00:25:07.019 14:26:58 -- dd/sparse.sh@53 -- # stat2_b=24576 00:25:07.019 14:26:58 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:25:07.019 00:25:07.019 real 0m0.807s 00:25:07.019 user 0m0.409s 00:25:07.019 sys 0m0.227s 00:25:07.019 14:26:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:07.019 14:26:58 -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 14:26:58 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:25:07.019 14:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:07.019 14:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:07.019 14:26:58 -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 ************************************ 00:25:07.019 START TEST dd_sparse_file_to_bdev 00:25:07.019 ************************************ 00:25:07.019 14:26:58 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:25:07.019 14:26:58 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:25:07.019 14:26:58 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:25:07.019 14:26:58 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:25:07.019 14:26:58 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:25:07.019 14:26:58 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:25:07.019 14:26:58 -- 
dd/sparse.sh@73 -- # gen_conf 00:25:07.019 14:26:58 -- dd/common.sh@31 -- # xtrace_disable 00:25:07.019 14:26:58 -- common/autotest_common.sh@10 -- # set +x 00:25:07.019 [2024-11-18 14:26:58.937337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:07.019 [2024-11-18 14:26:58.937754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144897 ] 00:25:07.019 { 00:25:07.019 "subsystems": [ 00:25:07.019 { 00:25:07.019 "subsystem": "bdev", 00:25:07.019 "config": [ 00:25:07.019 { 00:25:07.019 "params": { 00:25:07.019 "block_size": 4096, 00:25:07.019 "filename": "dd_sparse_aio_disk", 00:25:07.019 "name": "dd_aio" 00:25:07.019 }, 00:25:07.019 "method": "bdev_aio_create" 00:25:07.019 }, 00:25:07.019 { 00:25:07.019 "params": { 00:25:07.019 "lvs_name": "dd_lvstore", 00:25:07.019 "lvol_name": "dd_lvol", 00:25:07.019 "size": 37748736, 00:25:07.019 "thin_provision": true 00:25:07.019 }, 00:25:07.019 "method": "bdev_lvol_create" 00:25:07.019 }, 00:25:07.019 { 00:25:07.019 "method": "bdev_wait_for_examine" 00:25:07.019 } 00:25:07.019 ] 00:25:07.019 } 00:25:07.019 ] 00:25:07.019 } 00:25:07.019 [2024-11-18 14:26:59.085514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.278 [2024-11-18 14:26:59.168884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.278 [2024-11-18 14:26:59.269980] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:07.278  [2024-11-18T14:26:59.352Z] Copying: 12/36 [MB] (average 500 MBps)[2024-11-18 14:26:59.313385] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:07.844 00:25:07.844 00:25:07.844 ************************************ 00:25:07.844 END TEST dd_sparse_file_to_bdev 00:25:07.844 ************************************ 00:25:07.844 00:25:07.844 real 0m0.871s 00:25:07.844 user 0m0.534s 00:25:07.844 sys 0m0.231s 00:25:07.844 14:26:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:07.844 14:26:59 -- common/autotest_common.sh@10 -- # set +x 00:25:07.844 14:26:59 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:25:07.844 14:26:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:07.844 14:26:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:07.844 14:26:59 -- common/autotest_common.sh@10 -- # set +x 00:25:07.844 ************************************ 00:25:07.844 START TEST dd_sparse_bdev_to_file 00:25:07.844 ************************************ 00:25:07.844 14:26:59 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:25:07.844 14:26:59 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:25:07.844 14:26:59 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:25:07.844 14:26:59 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:25:07.844 14:26:59 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:25:07.844 14:26:59 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:25:07.844 14:26:59 -- dd/sparse.sh@91 -- # gen_conf 00:25:07.844 14:26:59 -- 
dd/common.sh@31 -- # xtrace_disable 00:25:07.844 14:26:59 -- common/autotest_common.sh@10 -- # set +x 00:25:07.844 { 00:25:07.844 "subsystems": [ 00:25:07.844 { 00:25:07.844 "subsystem": "bdev", 00:25:07.844 "config": [ 00:25:07.844 { 00:25:07.844 "params": { 00:25:07.844 "block_size": 4096, 00:25:07.844 "filename": "dd_sparse_aio_disk", 00:25:07.844 "name": "dd_aio" 00:25:07.844 }, 00:25:07.844 "method": "bdev_aio_create" 00:25:07.844 }, 00:25:07.844 { 00:25:07.844 "method": "bdev_wait_for_examine" 00:25:07.844 } 00:25:07.844 ] 00:25:07.844 } 00:25:07.844 ] 00:25:07.844 } 00:25:07.844 [2024-11-18 14:26:59.862815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:07.844 [2024-11-18 14:26:59.863211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144943 ] 00:25:08.102 [2024-11-18 14:27:00.009478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.102 [2024-11-18 14:27:00.092676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.361  [2024-11-18T14:27:00.693Z] Copying: 12/36 [MB] (average 800 MBps) 00:25:08.619 00:25:08.619 14:27:00 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:25:08.619 14:27:00 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:25:08.619 14:27:00 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:25:08.619 14:27:00 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:25:08.619 14:27:00 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:25:08.619 14:27:00 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:25:08.619 14:27:00 -- dd/sparse.sh@102 -- # stat2_b=24576 00:25:08.619 14:27:00 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:25:08.877 ************************************ 00:25:08.877 END TEST dd_sparse_bdev_to_file 00:25:08.877 ************************************ 00:25:08.877 14:27:00 -- dd/sparse.sh@103 -- # stat3_b=24576 00:25:08.877 14:27:00 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:25:08.877 00:25:08.877 real 0m0.889s 00:25:08.877 user 0m0.492s 00:25:08.877 sys 0m0.266s 00:25:08.877 14:27:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:08.877 14:27:00 -- common/autotest_common.sh@10 -- # set +x 00:25:08.877 14:27:00 -- dd/sparse.sh@1 -- # cleanup 00:25:08.877 14:27:00 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:25:08.877 14:27:00 -- dd/sparse.sh@12 -- # rm file_zero1 00:25:08.877 14:27:00 -- dd/sparse.sh@13 -- # rm file_zero2 00:25:08.877 14:27:00 -- dd/sparse.sh@14 -- # rm file_zero3 00:25:08.877 ************************************ 00:25:08.877 END TEST spdk_dd_sparse 00:25:08.877 ************************************ 00:25:08.877 00:25:08.877 real 0m2.945s 00:25:08.877 user 0m1.645s 00:25:08.877 sys 0m0.892s 00:25:08.877 14:27:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:08.877 14:27:00 -- common/autotest_common.sh@10 -- # set +x 00:25:08.877 14:27:00 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:25:08.877 14:27:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:08.877 14:27:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:08.877 14:27:00 -- common/autotest_common.sh@10 -- # set +x 00:25:08.877 ************************************ 00:25:08.877 START TEST spdk_dd_negative 00:25:08.877 ************************************ 00:25:08.877 14:27:00 -- 
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:25:08.878 * Looking for test storage... 00:25:08.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:08.878 14:27:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:08.878 14:27:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:08.878 14:27:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:09.137 14:27:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:09.137 14:27:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:09.137 14:27:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:09.137 14:27:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:09.137 14:27:00 -- scripts/common.sh@335 -- # IFS=.-: 00:25:09.137 14:27:00 -- scripts/common.sh@335 -- # read -ra ver1 00:25:09.137 14:27:00 -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.137 14:27:00 -- scripts/common.sh@336 -- # read -ra ver2 00:25:09.137 14:27:00 -- scripts/common.sh@337 -- # local 'op=<' 00:25:09.137 14:27:00 -- scripts/common.sh@339 -- # ver1_l=2 00:25:09.137 14:27:00 -- scripts/common.sh@340 -- # ver2_l=1 00:25:09.137 14:27:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:09.137 14:27:00 -- scripts/common.sh@343 -- # case "$op" in 00:25:09.137 14:27:00 -- scripts/common.sh@344 -- # : 1 00:25:09.137 14:27:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:09.137 14:27:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.137 14:27:00 -- scripts/common.sh@364 -- # decimal 1 00:25:09.137 14:27:01 -- scripts/common.sh@352 -- # local d=1 00:25:09.137 14:27:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.137 14:27:01 -- scripts/common.sh@354 -- # echo 1 00:25:09.137 14:27:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:09.137 14:27:01 -- scripts/common.sh@365 -- # decimal 2 00:25:09.137 14:27:01 -- scripts/common.sh@352 -- # local d=2 00:25:09.137 14:27:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.137 14:27:01 -- scripts/common.sh@354 -- # echo 2 00:25:09.138 14:27:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:09.138 14:27:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:09.138 14:27:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:09.138 14:27:01 -- scripts/common.sh@367 -- # return 0 00:25:09.138 14:27:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.138 14:27:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:09.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.138 --rc genhtml_branch_coverage=1 00:25:09.138 --rc genhtml_function_coverage=1 00:25:09.138 --rc genhtml_legend=1 00:25:09.138 --rc geninfo_all_blocks=1 00:25:09.138 --rc geninfo_unexecuted_blocks=1 00:25:09.138 00:25:09.138 ' 00:25:09.138 14:27:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:09.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.138 --rc genhtml_branch_coverage=1 00:25:09.138 --rc genhtml_function_coverage=1 00:25:09.138 --rc genhtml_legend=1 00:25:09.138 --rc geninfo_all_blocks=1 00:25:09.138 --rc geninfo_unexecuted_blocks=1 00:25:09.138 00:25:09.138 ' 00:25:09.138 14:27:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:09.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.138 --rc genhtml_branch_coverage=1 00:25:09.138 --rc genhtml_function_coverage=1 00:25:09.138 --rc genhtml_legend=1 
00:25:09.138 --rc geninfo_all_blocks=1 00:25:09.138 --rc geninfo_unexecuted_blocks=1 00:25:09.138 00:25:09.138 ' 00:25:09.138 14:27:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:09.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.138 --rc genhtml_branch_coverage=1 00:25:09.138 --rc genhtml_function_coverage=1 00:25:09.138 --rc genhtml_legend=1 00:25:09.138 --rc geninfo_all_blocks=1 00:25:09.138 --rc geninfo_unexecuted_blocks=1 00:25:09.138 00:25:09.138 ' 00:25:09.138 14:27:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:09.138 14:27:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.138 14:27:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.138 14:27:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.138 14:27:01 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:09.138 14:27:01 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:09.138 14:27:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:09.138 14:27:01 -- paths/export.sh@5 -- # export PATH 00:25:09.138 14:27:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:09.138 14:27:01 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:09.138 14:27:01 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:09.138 14:27:01 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:09.138 14:27:01 -- dd/negative_dd.sh@105 -- # touch 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:09.138 14:27:01 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:25:09.138 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.138 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.138 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.138 ************************************ 00:25:09.138 START TEST dd_invalid_arguments 00:25:09.138 ************************************ 00:25:09.138 14:27:01 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:25:09.138 14:27:01 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:25:09.138 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:09.138 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:25:09.138 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.138 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.138 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.138 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.138 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.138 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.138 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.138 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:09.138 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:25:09.138 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:25:09.138 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:25:09.138 options: 00:25:09.138 -c, --config JSON config file (default none) 00:25:09.138 --json JSON config file (default none) 00:25:09.138 --json-ignore-init-errors 00:25:09.138 don't exit on invalid config entry 00:25:09.138 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:25:09.138 -g, --single-file-segments 00:25:09.138 force creating just one hugetlbfs file 00:25:09.138 -h, --help show this usage 00:25:09.138 -i, --shm-id shared memory ID (optional) 00:25:09.138 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:25:09.138 --lcores lcore to CPU mapping list. The list is in the format: 00:25:09.138 [<,lcores[@CPUs]>...] 00:25:09.138 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:25:09.138 Within the group, '-' is used for range separator, 00:25:09.138 ',' is used for single number separator. 00:25:09.138 '( )' can be omitted for single element group, 00:25:09.138 '@' can be omitted if cpus and lcores have the same value 00:25:09.138 -n, --mem-channels channel number of memory channels used for DPDK 00:25:09.138 -p, --main-core main (primary) core for DPDK 00:25:09.138 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:25:09.138 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:25:09.138 --disable-cpumask-locks Disable CPU core lock files. 
00:25:09.138 --silence-noticelog disable notice level logging to stderr 00:25:09.138 --msg-mempool-size global message memory pool size in count (default: 262143) 00:25:09.138 -u, --no-pci disable PCI access 00:25:09.138 --wait-for-rpc wait for RPCs to initialize subsystems 00:25:09.138 --max-delay maximum reactor delay (in microseconds) 00:25:09.138 -B, --pci-blocked pci addr to block (can be used more than once) 00:25:09.138 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:25:09.138 -R, --huge-unlink unlink huge files after initialization 00:25:09.138 -v, --version print SPDK version 00:25:09.138 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:25:09.138 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:25:09.138 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:25:09.138 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:25:09.138 Tracepoints vary in size and can use more than one trace entry. 00:25:09.138 --rpcs-allowed comma-separated list of permitted RPCS 00:25:09.138 --env-context Opaque context for use of the env implementation 00:25:09.138 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:25:09.138 --no-huge run without using hugepages 00:25:09.138 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:25:09.138 -e, --tpoint-group [:] 00:25:09.138 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:25:09.138 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:25:09.138 Groups and [2024-11-18 14:27:01.095145] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:25:09.138 masks can be combined (e.g. thread,bdev:0x1). 00:25:09.139 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:25:09.139 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:25:09.139 [--------- DD Options ---------] 00:25:09.139 --if Input file. Must specify either --if or --ib. 00:25:09.139 --ib Input bdev. Must specifier either --if or --ib 00:25:09.139 --of Output file. Must specify either --of or --ob. 00:25:09.139 --ob Output bdev. Must specify either --of or --ob. 00:25:09.139 --iflag Input file flags. 00:25:09.139 --oflag Output file flags. 00:25:09.139 --bs I/O unit size (default: 4096) 00:25:09.139 --qd Queue depth (default: 2) 00:25:09.139 --count I/O unit count. The number of I/O units to copy. (default: all) 00:25:09.139 --skip Skip this many I/O units at start of input. 
(default: 0) 00:25:09.139 --seek Skip this many I/O units at start of output. (default: 0) 00:25:09.139 --aio Force usage of AIO. (by default io_uring is used if available) 00:25:09.139 --sparse Enable hole skipping in input target 00:25:09.139 Available iflag and oflag values: 00:25:09.139 append - append mode 00:25:09.139 direct - use direct I/O for data 00:25:09.139 directory - fail unless a directory 00:25:09.139 dsync - use synchronized I/O for data 00:25:09.139 noatime - do not update access time 00:25:09.139 noctty - do not assign controlling terminal from file 00:25:09.139 nofollow - do not follow symlinks 00:25:09.139 nonblock - use non-blocking I/O 00:25:09.139 sync - use synchronized I/O for data and metadata 00:25:09.139 14:27:01 -- common/autotest_common.sh@653 -- # es=2 00:25:09.139 14:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.139 14:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.139 14:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.139 00:25:09.139 real 0m0.103s 00:25:09.139 user 0m0.049s 00:25:09.139 sys 0m0.052s 00:25:09.139 14:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:09.139 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.139 ************************************ 00:25:09.139 END TEST dd_invalid_arguments 00:25:09.139 ************************************ 00:25:09.139 14:27:01 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:25:09.139 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.139 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.139 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.139 ************************************ 00:25:09.139 START TEST dd_double_input 00:25:09.139 ************************************ 00:25:09.139 14:27:01 -- common/autotest_common.sh@1114 -- # double_input 00:25:09.139 14:27:01 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:25:09.139 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:09.139 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:25:09.139 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.139 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.139 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.139 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.139 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.139 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.139 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.139 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:09.139 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:25:09.398 [2024-11-18 14:27:01.257582] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
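Each negative test from here on has the same shape: wrap the spdk_dd invocation in autotest_common.sh's NOT helper, let spdk_dd print its usage or a targeted *ERROR* line and exit non-zero, and let NOT invert that status so the test passes exactly when the command fails. The es traces show the bookkeeping: exit statuses above 128 are treated as signal deaths (128+n) and normalized before the final check. A stripped-down sketch of that pattern, assuming nothing beyond plain bash -- the real helper also maps certain statuses through a case table, as the dd_smaller_blocksize trace below shows with es=244 -> 116 -> 1:

    # Minimal NOT-style assertion: succeed only if the wrapped command fails.
    NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then
        es=$(( es - 128 ))   # 128+n means the command died on signal n
      fi
      (( es != 0 ))          # invert: a failing command makes the test pass
    }

    # Usage, mirroring the trace above:
    #   NOT spdk_dd --ii= --ob=    # passes: --ii= is an unrecognized option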
00:25:09.398 14:27:01 -- common/autotest_common.sh@653 -- # es=22 00:25:09.398 14:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.398 14:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.398 14:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.398 00:25:09.398 real 0m0.102s 00:25:09.398 user 0m0.054s 00:25:09.398 sys 0m0.046s 00:25:09.398 14:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:09.398 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.398 ************************************ 00:25:09.398 END TEST dd_double_input 00:25:09.398 ************************************ 00:25:09.398 14:27:01 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:25:09.398 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.398 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.398 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.398 ************************************ 00:25:09.398 START TEST dd_double_output 00:25:09.398 ************************************ 00:25:09.398 14:27:01 -- common/autotest_common.sh@1114 -- # double_output 00:25:09.398 14:27:01 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:25:09.398 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:09.398 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:25:09.398 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.398 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.398 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.398 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.398 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.398 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.398 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.398 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:09.398 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:25:09.398 [2024-11-18 14:27:01.421534] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:25:09.398 14:27:01 -- common/autotest_common.sh@653 -- # es=22 00:25:09.398 14:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.398 14:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.398 14:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.398 00:25:09.398 real 0m0.106s 00:25:09.398 user 0m0.053s 00:25:09.398 sys 0m0.052s 00:25:09.398 14:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:09.398 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.398 ************************************ 00:25:09.398 END TEST dd_double_output 00:25:09.398 ************************************ 00:25:09.658 14:27:01 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:25:09.658 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.658 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.658 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.658 ************************************ 00:25:09.658 START TEST dd_no_input 00:25:09.658 ************************************ 00:25:09.658 14:27:01 -- common/autotest_common.sh@1114 -- # no_input 00:25:09.658 14:27:01 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:25:09.658 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:09.658 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:25:09.658 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.658 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.658 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:09.658 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:25:09.658 [2024-11-18 14:27:01.578234] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:25:09.658 14:27:01 -- common/autotest_common.sh@653 -- # es=22 00:25:09.658 14:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.658 14:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.658 14:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.658 00:25:09.658 real 0m0.092s 00:25:09.658 user 0m0.057s 00:25:09.658 sys 0m0.033s 00:25:09.658 14:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:09.658 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.658 ************************************ 00:25:09.658 END TEST dd_no_input 00:25:09.658 ************************************ 00:25:09.658 14:27:01 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:25:09.658 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.658 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.658 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.658 ************************************ 
00:25:09.658 START TEST dd_no_output 00:25:09.658 ************************************ 00:25:09.658 14:27:01 -- common/autotest_common.sh@1114 -- # no_output 00:25:09.658 14:27:01 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:09.658 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:09.658 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:09.658 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.658 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.658 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.658 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:09.658 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:09.918 [2024-11-18 14:27:01.737489] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:25:09.918 14:27:01 -- common/autotest_common.sh@653 -- # es=22 00:25:09.918 14:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.918 14:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.918 14:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.918 00:25:09.918 real 0m0.105s 00:25:09.918 user 0m0.056s 00:25:09.918 sys 0m0.048s 00:25:09.918 14:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:09.918 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.918 ************************************ 00:25:09.918 END TEST dd_no_output 00:25:09.918 ************************************ 00:25:09.918 14:27:01 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:25:09.918 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.918 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.918 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.918 ************************************ 00:25:09.918 START TEST dd_wrong_blocksize 00:25:09.918 ************************************ 00:25:09.918 14:27:01 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:25:09.918 14:27:01 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:25:09.918 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:09.918 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:25:09.918 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.918 14:27:01 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:25:09.918 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.918 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.918 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.918 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.918 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:09.918 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:09.918 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:25:09.918 [2024-11-18 14:27:01.897851] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:25:09.918 14:27:01 -- common/autotest_common.sh@653 -- # es=22 00:25:09.918 14:27:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.918 14:27:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.918 14:27:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.918 00:25:09.918 real 0m0.093s 00:25:09.918 user 0m0.052s 00:25:09.918 sys 0m0.039s 00:25:09.918 14:27:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:09.918 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:09.918 ************************************ 00:25:09.918 END TEST dd_wrong_blocksize 00:25:09.918 ************************************ 00:25:09.918 14:27:01 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:25:09.918 14:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:09.918 14:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:09.918 14:27:01 -- common/autotest_common.sh@10 -- # set +x 00:25:10.178 ************************************ 00:25:10.178 START TEST dd_smaller_blocksize 00:25:10.178 ************************************ 00:25:10.178 14:27:01 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:25:10.178 14:27:01 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:25:10.178 14:27:01 -- common/autotest_common.sh@650 -- # local es=0 00:25:10.178 14:27:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:25:10.178 14:27:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.178 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.178 14:27:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.178 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.178 14:27:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.178 14:27:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.178 14:27:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.178 14:27:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:25:10.178 14:27:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:25:10.178 [2024-11-18 14:27:02.054571] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:10.178 [2024-11-18 14:27:02.054972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145210 ] 00:25:10.178 [2024-11-18 14:27:02.205373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.436 [2024-11-18 14:27:02.295242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.436 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:25:10.436 [2024-11-18 14:27:02.505293] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:25:10.436 [2024-11-18 14:27:02.505828] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:10.695 [2024-11-18 14:27:02.677875] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:10.954 14:27:02 -- common/autotest_common.sh@653 -- # es=244 00:25:10.954 14:27:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.954 14:27:02 -- common/autotest_common.sh@662 -- # es=116 00:25:10.954 14:27:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:10.954 14:27:02 -- common/autotest_common.sh@670 -- # es=1 00:25:10.954 14:27:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.954 00:25:10.954 real 0m0.810s 00:25:10.954 user 0m0.387s 00:25:10.954 sys 0m0.317s 00:25:10.954 14:27:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:10.954 14:27:02 -- common/autotest_common.sh@10 -- # set +x 00:25:10.954 ************************************ 00:25:10.954 END TEST dd_smaller_blocksize 00:25:10.954 ************************************ 00:25:10.954 14:27:02 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:25:10.954 14:27:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:10.954 14:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:10.954 14:27:02 -- common/autotest_common.sh@10 -- # set +x 00:25:10.954 ************************************ 00:25:10.954 START TEST dd_invalid_count 00:25:10.954 ************************************ 00:25:10.954 14:27:02 -- common/autotest_common.sh@1114 -- # invalid_count 00:25:10.954 14:27:02 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:25:10.954 14:27:02 -- common/autotest_common.sh@650 -- # local es=0 00:25:10.954 14:27:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:25:10.954 14:27:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.954 14:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.954 14:27:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.954 14:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.954 14:27:02 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.954 14:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.954 14:27:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.954 14:27:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:10.954 14:27:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:25:10.954 [2024-11-18 14:27:02.918462] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:25:10.954 14:27:02 -- common/autotest_common.sh@653 -- # es=22 00:25:10.954 14:27:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:10.954 14:27:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:10.954 14:27:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:10.954 00:25:10.954 real 0m0.108s 00:25:10.954 user 0m0.071s 00:25:10.954 sys 0m0.033s 00:25:10.954 14:27:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:10.954 14:27:02 -- common/autotest_common.sh@10 -- # set +x 00:25:10.954 ************************************ 00:25:10.954 END TEST dd_invalid_count 00:25:10.954 ************************************ 00:25:10.954 14:27:03 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:25:10.954 14:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:10.954 14:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:10.954 14:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:10.954 ************************************ 00:25:10.954 START TEST dd_invalid_oflag 00:25:10.954 ************************************ 00:25:11.213 14:27:03 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:25:11.213 14:27:03 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:25:11.213 14:27:03 -- common/autotest_common.sh@650 -- # local es=0 00:25:11.213 14:27:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:25:11.213 14:27:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.214 14:27:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.214 14:27:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:11.214 14:27:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:25:11.214 [2024-11-18 14:27:03.082429] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:25:11.214 14:27:03 -- common/autotest_common.sh@653 -- # es=22 00:25:11.214 14:27:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.214 14:27:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.214 
14:27:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.214 00:25:11.214 real 0m0.100s 00:25:11.214 user 0m0.052s 00:25:11.214 sys 0m0.046s 00:25:11.214 14:27:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:11.214 14:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:11.214 ************************************ 00:25:11.214 END TEST dd_invalid_oflag 00:25:11.214 ************************************ 00:25:11.214 14:27:03 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:25:11.214 14:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:11.214 14:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:11.214 14:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:11.214 ************************************ 00:25:11.214 START TEST dd_invalid_iflag 00:25:11.214 ************************************ 00:25:11.214 14:27:03 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:25:11.214 14:27:03 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:25:11.214 14:27:03 -- common/autotest_common.sh@650 -- # local es=0 00:25:11.214 14:27:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:25:11.214 14:27:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.214 14:27:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.214 14:27:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.214 14:27:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:11.214 14:27:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:25:11.214 [2024-11-18 14:27:03.237640] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:25:11.214 14:27:03 -- common/autotest_common.sh@653 -- # es=22 00:25:11.214 14:27:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.214 14:27:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.214 14:27:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.214 00:25:11.214 real 0m0.099s 00:25:11.214 user 0m0.040s 00:25:11.214 sys 0m0.057s 00:25:11.214 14:27:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:11.214 14:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:11.214 ************************************ 00:25:11.214 END TEST dd_invalid_iflag 00:25:11.214 ************************************ 00:25:11.473 14:27:03 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:25:11.473 14:27:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:11.473 14:27:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:11.473 14:27:03 -- common/autotest_common.sh@10 -- # set +x 00:25:11.473 ************************************ 00:25:11.473 START TEST dd_unknown_flag 00:25:11.473 ************************************ 00:25:11.473 14:27:03 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:25:11.473 14:27:03 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:25:11.473 14:27:03 -- common/autotest_common.sh@650 -- # local es=0 00:25:11.473 14:27:03 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:25:11.473 14:27:03 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.473 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.473 14:27:03 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.473 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.473 14:27:03 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.473 14:27:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.473 14:27:03 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.473 14:27:03 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:11.473 14:27:03 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:25:11.473 [2024-11-18 14:27:03.397979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:11.473 [2024-11-18 14:27:03.398776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145332 ] 00:25:11.473 [2024-11-18 14:27:03.537445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.733 [2024-11-18 14:27:03.621305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.733 [2024-11-18 14:27:03.733965] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:25:11.733 [2024-11-18 14:27:03.734673] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:25:11.733 [2024-11-18 14:27:03.734963] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:25:11.733 [2024-11-18 14:27:03.735277] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:11.991 [2024-11-18 14:27:03.899403] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:11.992 14:27:04 -- common/autotest_common.sh@653 -- # es=236 00:25:11.992 14:27:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.992 14:27:04 -- common/autotest_common.sh@662 -- # es=108 00:25:11.992 14:27:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:11.992 14:27:04 -- common/autotest_common.sh@670 -- # es=1 00:25:11.992 14:27:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.992 00:25:11.992 real 0m0.687s 00:25:11.992 user 0m0.367s 00:25:11.992 sys 0m0.214s 00:25:11.992 14:27:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:11.992 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.992 ************************************ 00:25:11.992 END 
TEST dd_unknown_flag 00:25:11.992 ************************************ 00:25:12.251 14:27:04 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:25:12.251 14:27:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:12.251 14:27:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:12.251 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:12.251 ************************************ 00:25:12.251 START TEST dd_invalid_json 00:25:12.251 ************************************ 00:25:12.251 14:27:04 -- common/autotest_common.sh@1114 -- # invalid_json 00:25:12.251 14:27:04 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:25:12.251 14:27:04 -- dd/negative_dd.sh@95 -- # : 00:25:12.251 14:27:04 -- common/autotest_common.sh@650 -- # local es=0 00:25:12.251 14:27:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:25:12.251 14:27:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.251 14:27:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.251 14:27:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.251 14:27:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.251 14:27:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.251 14:27:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.251 14:27:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:12.251 14:27:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:12.251 14:27:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:25:12.251 [2024-11-18 14:27:04.135666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:12.251 [2024-11-18 14:27:04.136667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145368 ] 00:25:12.251 [2024-11-18 14:27:04.276758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.510 [2024-11-18 14:27:04.353395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.510 [2024-11-18 14:27:04.354310] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:25:12.510 [2024-11-18 14:27:04.354668] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:12.510 [2024-11-18 14:27:04.355017] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:12.510 14:27:04 -- common/autotest_common.sh@653 -- # es=234 00:25:12.510 14:27:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:12.510 14:27:04 -- common/autotest_common.sh@662 -- # es=106 00:25:12.510 14:27:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:12.510 14:27:04 -- common/autotest_common.sh@670 -- # es=1 00:25:12.510 14:27:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:12.510 00:25:12.510 real 0m0.384s 00:25:12.510 user 0m0.156s 00:25:12.510 sys 0m0.125s 00:25:12.510 14:27:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:12.510 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:12.510 ************************************ 00:25:12.510 END TEST dd_invalid_json 00:25:12.510 ************************************ 00:25:12.510 00:25:12.510 real 0m3.682s 00:25:12.510 user 0m1.887s 00:25:12.510 sys 0m1.407s 00:25:12.510 14:27:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:12.510 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:12.510 ************************************ 00:25:12.510 END TEST spdk_dd_negative 00:25:12.510 ************************************ 00:25:12.510 00:25:12.510 real 1m10.410s 00:25:12.510 user 0m39.966s 00:25:12.510 sys 0m19.754s 00:25:12.510 14:27:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:12.510 ************************************ 00:25:12.510 END TEST spdk_dd 00:25:12.510 ************************************ 00:25:12.510 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:12.770 14:27:04 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:25:12.770 14:27:04 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:25:12.770 14:27:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:12.770 14:27:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:12.770 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:12.770 ************************************ 00:25:12.770 START TEST blockdev_nvme 00:25:12.770 ************************************ 00:25:12.770 14:27:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:25:12.770 * Looking for test storage... 
00:25:12.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:12.770 14:27:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:12.770 14:27:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:12.770 14:27:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:12.770 14:27:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:12.770 14:27:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:12.770 14:27:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:12.770 14:27:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:12.770 14:27:04 -- scripts/common.sh@335 -- # IFS=.-: 00:25:12.770 14:27:04 -- scripts/common.sh@335 -- # read -ra ver1 00:25:12.770 14:27:04 -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.770 14:27:04 -- scripts/common.sh@336 -- # read -ra ver2 00:25:12.770 14:27:04 -- scripts/common.sh@337 -- # local 'op=<' 00:25:12.770 14:27:04 -- scripts/common.sh@339 -- # ver1_l=2 00:25:12.770 14:27:04 -- scripts/common.sh@340 -- # ver2_l=1 00:25:12.770 14:27:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:12.770 14:27:04 -- scripts/common.sh@343 -- # case "$op" in 00:25:12.770 14:27:04 -- scripts/common.sh@344 -- # : 1 00:25:12.770 14:27:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:12.770 14:27:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:12.770 14:27:04 -- scripts/common.sh@364 -- # decimal 1 00:25:12.770 14:27:04 -- scripts/common.sh@352 -- # local d=1 00:25:12.770 14:27:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.770 14:27:04 -- scripts/common.sh@354 -- # echo 1 00:25:12.770 14:27:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:12.770 14:27:04 -- scripts/common.sh@365 -- # decimal 2 00:25:12.770 14:27:04 -- scripts/common.sh@352 -- # local d=2 00:25:12.770 14:27:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.770 14:27:04 -- scripts/common.sh@354 -- # echo 2 00:25:12.770 14:27:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:12.770 14:27:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:12.770 14:27:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:12.770 14:27:04 -- scripts/common.sh@367 -- # return 0 00:25:12.770 14:27:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.770 14:27:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:12.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.770 --rc genhtml_branch_coverage=1 00:25:12.770 --rc genhtml_function_coverage=1 00:25:12.770 --rc genhtml_legend=1 00:25:12.770 --rc geninfo_all_blocks=1 00:25:12.770 --rc geninfo_unexecuted_blocks=1 00:25:12.770 00:25:12.770 ' 00:25:12.770 14:27:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:12.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.770 --rc genhtml_branch_coverage=1 00:25:12.770 --rc genhtml_function_coverage=1 00:25:12.770 --rc genhtml_legend=1 00:25:12.770 --rc geninfo_all_blocks=1 00:25:12.770 --rc geninfo_unexecuted_blocks=1 00:25:12.770 00:25:12.770 ' 00:25:12.770 14:27:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:12.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.770 --rc genhtml_branch_coverage=1 00:25:12.770 --rc genhtml_function_coverage=1 00:25:12.770 --rc genhtml_legend=1 00:25:12.770 --rc geninfo_all_blocks=1 00:25:12.770 --rc geninfo_unexecuted_blocks=1 00:25:12.770 00:25:12.770 ' 00:25:12.770 14:27:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:12.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.770 --rc genhtml_branch_coverage=1 00:25:12.770 --rc genhtml_function_coverage=1 00:25:12.770 --rc genhtml_legend=1 00:25:12.770 --rc geninfo_all_blocks=1 00:25:12.770 --rc geninfo_unexecuted_blocks=1 00:25:12.770 00:25:12.770 ' 00:25:12.770 14:27:04 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:12.770 14:27:04 -- bdev/nbd_common.sh@6 -- # set -e 00:25:12.770 14:27:04 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:12.770 14:27:04 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:12.770 14:27:04 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:12.770 14:27:04 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:12.770 14:27:04 -- bdev/blockdev.sh@18 -- # : 00:25:12.770 14:27:04 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:25:12.770 14:27:04 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:25:12.770 14:27:04 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:25:12.770 14:27:04 -- bdev/blockdev.sh@672 -- # uname -s 00:25:12.770 14:27:04 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:25:12.770 14:27:04 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:25:12.770 14:27:04 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:25:12.770 14:27:04 -- bdev/blockdev.sh@681 -- # crypto_device= 00:25:12.770 14:27:04 -- bdev/blockdev.sh@682 -- # dek= 00:25:12.770 14:27:04 -- bdev/blockdev.sh@683 -- # env_ctx= 00:25:12.770 14:27:04 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:25:12.770 14:27:04 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:25:12.770 14:27:04 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:25:12.770 14:27:04 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:25:12.770 14:27:04 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:25:12.770 14:27:04 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=145461 00:25:12.770 14:27:04 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:12.770 14:27:04 -- bdev/blockdev.sh@47 -- # waitforlisten 145461 00:25:12.771 14:27:04 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:12.771 14:27:04 -- common/autotest_common.sh@829 -- # '[' -z 145461 ']' 00:25:12.771 14:27:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.771 14:27:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:12.771 14:27:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.771 14:27:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.771 14:27:04 -- common/autotest_common.sh@10 -- # set +x 00:25:12.771 [2024-11-18 14:27:04.828844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:12.771 [2024-11-18 14:27:04.829078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145461 ] 00:25:13.030 [2024-11-18 14:27:04.973669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.030 [2024-11-18 14:27:05.045305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:13.030 [2024-11-18 14:27:05.046150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.967 14:27:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.967 14:27:05 -- common/autotest_common.sh@862 -- # return 0 00:25:13.967 14:27:05 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:25:13.967 14:27:05 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:25:13.967 14:27:05 -- bdev/blockdev.sh@79 -- # local json 00:25:13.967 14:27:05 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:25:13.967 14:27:05 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:13.967 14:27:05 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:25:13.967 14:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.967 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:13.967 14:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.967 14:27:05 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:25:13.967 14:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.967 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:13.967 14:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.967 14:27:05 -- bdev/blockdev.sh@738 -- # cat 00:25:13.967 14:27:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:25:13.967 14:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.968 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 14:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.968 14:27:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:25:13.968 14:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.968 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 14:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.968 14:27:05 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:13.968 14:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.968 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 14:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.968 14:27:05 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:25:13.968 14:27:05 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:25:13.968 14:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.968 14:27:05 -- common/autotest_common.sh@10 -- # set +x 00:25:13.968 14:27:05 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:25:13.968 14:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.968 14:27:06 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:25:13.968 14:27:06 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f73772b6-49a1-45d5-9f11-3876b535e074"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f73772b6-49a1-45d5-9f11-3876b535e074",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:25:13.968 14:27:06 -- bdev/blockdev.sh@747 -- # jq -r .name 00:25:14.227 14:27:06 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:25:14.227 14:27:06 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:25:14.227 14:27:06 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:25:14.227 14:27:06 -- bdev/blockdev.sh@752 -- # killprocess 145461 00:25:14.227 14:27:06 -- common/autotest_common.sh@936 -- # '[' -z 145461 ']' 00:25:14.227 14:27:06 -- common/autotest_common.sh@940 -- # kill -0 145461 00:25:14.227 14:27:06 -- common/autotest_common.sh@941 -- # uname 00:25:14.227 14:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.227 14:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145461 00:25:14.227 14:27:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:14.227 14:27:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:14.227 14:27:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 145461' 00:25:14.227 killing process with pid 145461 00:25:14.227 14:27:06 -- common/autotest_common.sh@955 -- # kill 145461 00:25:14.227 14:27:06 -- common/autotest_common.sh@960 -- # wait 145461 00:25:14.486 14:27:06 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:14.486 14:27:06 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:14.486 14:27:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:25:14.486 14:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:14.486 14:27:06 -- common/autotest_common.sh@10 -- # set +x 00:25:14.486 ************************************ 00:25:14.486 START TEST bdev_hello_world 00:25:14.486 ************************************ 00:25:14.486 14:27:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:25:14.747 [2024-11-18 14:27:06.565928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:14.747 [2024-11-18 14:27:06.566156] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145528 ] 00:25:14.747 [2024-11-18 14:27:06.712573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.747 [2024-11-18 14:27:06.803076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.007 [2024-11-18 14:27:07.008499] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:15.007 [2024-11-18 14:27:07.008892] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:25:15.007 [2024-11-18 14:27:07.009180] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:15.007 [2024-11-18 14:27:07.011515] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:15.007 [2024-11-18 14:27:07.012165] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:15.007 [2024-11-18 14:27:07.012456] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:15.007 [2024-11-18 14:27:07.012897] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:15.007 00:25:15.007 [2024-11-18 14:27:07.013197] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:15.266 00:25:15.266 real 0m0.721s 00:25:15.266 user 0m0.418s 00:25:15.266 sys 0m0.201s 00:25:15.266 14:27:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:15.266 ************************************ 00:25:15.266 END TEST bdev_hello_world 00:25:15.266 ************************************ 00:25:15.266 14:27:07 -- common/autotest_common.sh@10 -- # set +x 00:25:15.266 14:27:07 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:25:15.266 14:27:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.266 14:27:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.266 14:27:07 -- common/autotest_common.sh@10 -- # set +x 00:25:15.266 ************************************ 00:25:15.266 START TEST bdev_bounds 00:25:15.266 ************************************ 00:25:15.266 14:27:07 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:25:15.266 14:27:07 -- bdev/blockdev.sh@288 -- # bdevio_pid=145566 00:25:15.266 14:27:07 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:15.266 14:27:07 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:15.266 Process bdevio pid: 145566 00:25:15.266 14:27:07 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 145566' 00:25:15.266 14:27:07 -- bdev/blockdev.sh@291 -- # waitforlisten 145566 00:25:15.266 14:27:07 -- common/autotest_common.sh@829 -- # '[' -z 145566 ']' 00:25:15.266 14:27:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.266 14:27:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.266 14:27:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:15.267 14:27:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.267 14:27:07 -- common/autotest_common.sh@10 -- # set +x 00:25:15.526 [2024-11-18 14:27:07.340760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:15.526 [2024-11-18 14:27:07.340972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145566 ] 00:25:15.526 [2024-11-18 14:27:07.496151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:15.526 [2024-11-18 14:27:07.559277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.526 [2024-11-18 14:27:07.559409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.526 [2024-11-18 14:27:07.560065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.464 14:27:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.464 14:27:08 -- common/autotest_common.sh@862 -- # return 0 00:25:16.464 14:27:08 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:16.464 I/O targets: 00:25:16.464 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:25:16.464 00:25:16.464 00:25:16.464 CUnit - A unit testing framework for C - Version 2.1-3 00:25:16.464 http://cunit.sourceforge.net/ 00:25:16.464 00:25:16.464 00:25:16.464 Suite: bdevio tests on: Nvme0n1 00:25:16.464 Test: blockdev write read block ...passed 00:25:16.464 Test: blockdev write zeroes read block ...passed 00:25:16.464 Test: blockdev write zeroes read no split ...passed 00:25:16.464 Test: blockdev write zeroes read split ...passed 00:25:16.464 Test: blockdev write zeroes read split partial ...passed 00:25:16.464 Test: blockdev reset ...[2024-11-18 14:27:08.427542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:16.464 [2024-11-18 14:27:08.430209] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:16.464 passed 00:25:16.464 Test: blockdev write read 8 blocks ...passed 00:25:16.464 Test: blockdev write read size > 128k ...passed 00:25:16.464 Test: blockdev write read invalid size ...passed 00:25:16.464 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:16.464 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:16.464 Test: blockdev write read max offset ...passed 00:25:16.464 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:16.464 Test: blockdev writev readv 8 blocks ...passed 00:25:16.464 Test: blockdev writev readv 30 x 1block ...passed 00:25:16.464 Test: blockdev writev readv block ...passed 00:25:16.464 Test: blockdev writev readv size > 128k ...passed 00:25:16.464 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:16.464 Test: blockdev comparev and writev ...[2024-11-18 14:27:08.437091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3720d000 len:0x1000 00:25:16.464 [2024-11-18 14:27:08.437179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:16.464 passed 00:25:16.464 Test: blockdev nvme passthru rw ...passed 00:25:16.464 Test: blockdev nvme passthru vendor specific ...[2024-11-18 14:27:08.438101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:25:16.464 passed 00:25:16.464 Test: blockdev nvme admin passthru ...[2024-11-18 14:27:08.438158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:25:16.464 passed 00:25:16.464 Test: blockdev copy ...passed 00:25:16.464 00:25:16.464 Run Summary: Type Total Ran Passed Failed Inactive 00:25:16.464 suites 1 1 n/a 0 0 00:25:16.464 tests 23 23 23 0 0 00:25:16.464 asserts 152 152 152 0 n/a 00:25:16.464 00:25:16.464 Elapsed time = 0.057 seconds 00:25:16.464 0 00:25:16.464 14:27:08 -- bdev/blockdev.sh@293 -- # killprocess 145566 00:25:16.464 14:27:08 -- common/autotest_common.sh@936 -- # '[' -z 145566 ']' 00:25:16.464 14:27:08 -- common/autotest_common.sh@940 -- # kill -0 145566 00:25:16.464 14:27:08 -- common/autotest_common.sh@941 -- # uname 00:25:16.464 14:27:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:16.464 14:27:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145566 00:25:16.464 14:27:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:16.464 14:27:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:16.464 14:27:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 145566' 00:25:16.464 killing process with pid 145566 00:25:16.464 14:27:08 -- common/autotest_common.sh@955 -- # kill 145566 00:25:16.464 14:27:08 -- common/autotest_common.sh@960 -- # wait 145566 00:25:16.722 14:27:08 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:25:16.722 00:25:16.722 real 0m1.488s 00:25:16.722 user 0m3.842s 00:25:16.722 sys 0m0.300s 00:25:16.722 14:27:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:16.722 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:16.722 ************************************ 00:25:16.722 END TEST bdev_bounds 00:25:16.722 ************************************ 00:25:16.981 14:27:08 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:25:16.981 14:27:08 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:16.981 14:27:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:16.981 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:16.981 ************************************ 00:25:16.981 START TEST bdev_nbd 00:25:16.981 ************************************ 00:25:16.981 14:27:08 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:25:16.981 14:27:08 -- bdev/blockdev.sh@298 -- # uname -s 00:25:16.981 14:27:08 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:25:16.981 14:27:08 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:16.981 14:27:08 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:16.981 14:27:08 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:25:16.981 14:27:08 -- bdev/blockdev.sh@302 -- # local bdev_all 00:25:16.981 14:27:08 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:25:16.981 14:27:08 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:25:16.981 14:27:08 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:16.981 14:27:08 -- bdev/blockdev.sh@309 -- # local nbd_all 00:25:16.981 14:27:08 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:25:16.981 14:27:08 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:25:16.981 14:27:08 -- bdev/blockdev.sh@312 -- # local nbd_list 00:25:16.981 14:27:08 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:25:16.981 14:27:08 -- bdev/blockdev.sh@313 -- # local bdev_list 00:25:16.981 14:27:08 -- bdev/blockdev.sh@316 -- # nbd_pid=145618 00:25:16.981 14:27:08 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:16.981 14:27:08 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:16.981 14:27:08 -- bdev/blockdev.sh@318 -- # waitforlisten 145618 /var/tmp/spdk-nbd.sock 00:25:16.981 14:27:08 -- common/autotest_common.sh@829 -- # '[' -z 145618 ']' 00:25:16.981 14:27:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:16.982 14:27:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:16.982 14:27:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:16.982 14:27:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.982 14:27:08 -- common/autotest_common.sh@10 -- # set +x 00:25:16.982 [2024-11-18 14:27:08.901488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:16.982 [2024-11-18 14:27:08.901751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.982 [2024-11-18 14:27:09.049161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.239 [2024-11-18 14:27:09.118507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.873 14:27:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.873 14:27:09 -- common/autotest_common.sh@862 -- # return 0 00:25:17.873 14:27:09 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@24 -- # local i 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:17.873 14:27:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:25:18.153 14:27:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:18.153 14:27:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:18.153 14:27:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:18.153 14:27:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:18.153 14:27:10 -- common/autotest_common.sh@867 -- # local i 00:25:18.153 14:27:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:18.153 14:27:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:18.153 14:27:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:18.153 14:27:10 -- common/autotest_common.sh@871 -- # break 00:25:18.153 14:27:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:18.153 14:27:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:18.153 14:27:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:18.153 1+0 records in 00:25:18.153 1+0 records out 00:25:18.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434221 s, 9.4 MB/s 00:25:18.153 14:27:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:18.154 14:27:10 -- common/autotest_common.sh@884 -- # size=4096 00:25:18.154 14:27:10 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:18.154 14:27:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:18.154 14:27:10 -- common/autotest_common.sh@887 -- # return 0 00:25:18.154 14:27:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:18.154 14:27:10 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:18.154 14:27:10 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:18.412 14:27:10 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:18.412 { 00:25:18.412 "nbd_device": "/dev/nbd0", 00:25:18.412 "bdev_name": "Nvme0n1" 00:25:18.412 } 00:25:18.412 ]' 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:18.412 { 00:25:18.412 "nbd_device": "/dev/nbd0", 00:25:18.412 "bdev_name": "Nvme0n1" 00:25:18.412 } 00:25:18.412 ]' 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@51 -- # local i 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:18.412 14:27:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@41 -- # break 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@45 -- # return 0 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.671 14:27:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@65 -- # true 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@65 -- # count=0 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@122 -- # count=0 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@127 -- # return 0 00:25:18.930 14:27:10 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@12 -- # local i 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:18.930 14:27:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:25:19.189 /dev/nbd0 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:19.189 14:27:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:19.189 14:27:11 -- common/autotest_common.sh@867 -- # local i 00:25:19.189 14:27:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:19.189 14:27:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:19.189 14:27:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:19.189 14:27:11 -- common/autotest_common.sh@871 -- # break 00:25:19.189 14:27:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:19.189 14:27:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:19.189 14:27:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:19.189 1+0 records in 00:25:19.189 1+0 records out 00:25:19.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00092643 s, 4.4 MB/s 00:25:19.189 14:27:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:19.189 14:27:11 -- common/autotest_common.sh@884 -- # size=4096 00:25:19.189 14:27:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:19.189 14:27:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:19.189 14:27:11 -- common/autotest_common.sh@887 -- # return 0 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:19.189 14:27:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:19.448 { 00:25:19.448 "nbd_device": "/dev/nbd0", 00:25:19.448 "bdev_name": "Nvme0n1" 00:25:19.448 } 00:25:19.448 ]' 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:19.448 { 00:25:19.448 "nbd_device": "/dev/nbd0", 00:25:19.448 "bdev_name": "Nvme0n1" 00:25:19.448 } 00:25:19.448 ]' 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@65 -- # count=1 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@66 -- # echo 1 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@95 -- # count=1 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:25:19.448 14:27:11 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:19.448 14:27:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:19.707 256+0 records in 00:25:19.707 256+0 records out 00:25:19.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00765222 s, 137 MB/s 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:19.707 256+0 records in 00:25:19.707 256+0 records out 00:25:19.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0648945 s, 16.2 MB/s 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@51 -- # local i 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:19.707 14:27:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@41 -- # break 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@45 -- # return 0 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:19.967 14:27:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:20.225 
14:27:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@65 -- # true 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@65 -- # count=0 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@104 -- # count=0 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@109 -- # return 0 00:25:20.225 14:27:12 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:20.225 14:27:12 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:20.484 malloc_lvol_verify 00:25:20.484 14:27:12 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:20.743 7a9a0d24-de0e-4ceb-a3c6-72b8a393ae2e 00:25:20.743 14:27:12 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:21.002 1f29e966-75f4-4798-8007-95b48e821c38 00:25:21.002 14:27:12 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:21.002 /dev/nbd0 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:21.002 mke2fs 1.46.5 (30-Dec-2021) 00:25:21.002 00:25:21.002 Filesystem too small for a journal 00:25:21.002 Discarding device blocks: 0/1024 done 00:25:21.002 Creating filesystem with 1024 4k blocks and 1024 inodes 00:25:21.002 00:25:21.002 Allocating group tables: 0/1 done 00:25:21.002 Writing inode tables: 0/1 done 00:25:21.002 Writing superblocks and filesystem accounting information: 0/1 done 00:25:21.002 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@51 -- # local i 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:21.002 14:27:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@41 -- # break 00:25:21.261 14:27:13 -- 
bdev/nbd_common.sh@45 -- # return 0 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:21.261 14:27:13 -- bdev/nbd_common.sh@147 -- # return 0 00:25:21.261 14:27:13 -- bdev/blockdev.sh@324 -- # killprocess 145618 00:25:21.261 14:27:13 -- common/autotest_common.sh@936 -- # '[' -z 145618 ']' 00:25:21.261 14:27:13 -- common/autotest_common.sh@940 -- # kill -0 145618 00:25:21.261 14:27:13 -- common/autotest_common.sh@941 -- # uname 00:25:21.261 14:27:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:21.261 14:27:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145618 00:25:21.519 14:27:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:21.519 14:27:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:21.519 killing process with pid 145618 00:25:21.519 14:27:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 145618' 00:25:21.519 14:27:13 -- common/autotest_common.sh@955 -- # kill 145618 00:25:21.519 14:27:13 -- common/autotest_common.sh@960 -- # wait 145618 00:25:21.779 14:27:13 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:25:21.779 00:25:21.779 real 0m4.838s 00:25:21.779 user 0m7.271s 00:25:21.779 sys 0m1.188s 00:25:21.779 14:27:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:21.779 14:27:13 -- common/autotest_common.sh@10 -- # set +x 00:25:21.779 ************************************ 00:25:21.779 END TEST bdev_nbd 00:25:21.779 ************************************ 00:25:21.779 14:27:13 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:25:21.779 14:27:13 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:25:21.779 skipping fio tests on NVMe due to multi-ns failures. 00:25:21.779 14:27:13 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:25:21.779 14:27:13 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:21.779 14:27:13 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:21.779 14:27:13 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:21.779 14:27:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:21.779 14:27:13 -- common/autotest_common.sh@10 -- # set +x 00:25:21.779 ************************************ 00:25:21.779 START TEST bdev_verify 00:25:21.779 ************************************ 00:25:21.779 14:27:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:21.779 [2024-11-18 14:27:13.786048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:21.779 [2024-11-18 14:27:13.786276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145805 ] 00:25:22.039 [2024-11-18 14:27:13.937078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:22.039 [2024-11-18 14:27:14.018353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.039 [2024-11-18 14:27:14.018369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.297 Running I/O for 5 seconds... 
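The bdev_verify pass above is driven by the bdevperf example app pointed at the generated bdev.json. To repeat the same workload by hand with this job's paths, a minimal sketch (the flag glosses follow bdevperf's usage text; double-check -C against your SPDK version):

  cd /home/vagrant/spdk_repo/spdk
  # -q 128: queue depth   -o 4096: I/O size in bytes   -w verify: write, read back, compare
  # -t 5: run seconds     -C: every core drives every bdev   -m 0x3: cores 0 and 1
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3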
00:25:27.567 00:25:27.567 Latency(us) 00:25:27.567 [2024-11-18T14:27:19.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.567 [2024-11-18T14:27:19.641Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:27.567 Verification LBA range: start 0x0 length 0xa0000 00:25:27.567 Nvme0n1 : 5.01 14435.58 56.39 0.00 0.00 8832.28 402.15 17992.61 00:25:27.567 [2024-11-18T14:27:19.641Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:27.567 Verification LBA range: start 0xa0000 length 0xa0000 00:25:27.567 Nvme0n1 : 5.01 14248.10 55.66 0.00 0.00 8947.73 506.41 16920.20 00:25:27.567 [2024-11-18T14:27:19.641Z] =================================================================================================================== 00:25:27.567 [2024-11-18T14:27:19.641Z] Total : 28683.68 112.05 0.00 0.00 8889.63 402.15 17992.61 00:25:34.134 00:25:34.134 real 0m11.876s 00:25:34.134 user 0m16.333s 00:25:34.134 sys 0m6.913s 00:25:34.134 14:27:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:34.134 14:27:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.134 ************************************ 00:25:34.134 END TEST bdev_verify 00:25:34.134 ************************************ 00:25:34.134 14:27:25 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:34.134 14:27:25 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:34.134 14:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:34.134 14:27:25 -- common/autotest_common.sh@10 -- # set +x 00:25:34.134 ************************************ 00:25:34.134 START TEST bdev_verify_big_io 00:25:34.134 ************************************ 00:25:34.134 14:27:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:34.134 [2024-11-18 14:27:25.700350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:34.134 [2024-11-18 14:27:25.701254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145921 ] 00:25:34.134 [2024-11-18 14:27:25.840801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:34.134 [2024-11-18 14:27:25.901575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.134 [2024-11-18 14:27:25.901582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.134 Running I/O for 5 seconds... 
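The MiB/s column in the verify table above is just IOPS times the 4 KiB I/O size; a one-line check on the core-mask-0x1 job:

  # 14435.58 IOPS x 4096 B per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f\n", 14435.58 * 4096 / (1024 * 1024) }'   # -> 56.39, matching the table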
00:25:39.405 00:25:39.405 Latency(us) 00:25:39.405 [2024-11-18T14:27:31.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.405 [2024-11-18T14:27:31.479Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:39.405 Verification LBA range: start 0x0 length 0xa000 00:25:39.405 Nvme0n1 : 5.03 2056.64 128.54 0.00 0.00 61395.25 606.95 92465.34 00:25:39.405 [2024-11-18T14:27:31.479Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:39.405 Verification LBA range: start 0xa000 length 0xa000 00:25:39.405 Nvme0n1 : 5.03 2917.90 182.37 0.00 0.00 43363.18 703.77 60531.43 00:25:39.405 [2024-11-18T14:27:31.479Z] =================================================================================================================== 00:25:39.405 [2024-11-18T14:27:31.479Z] Total : 4974.54 310.91 0.00 0.00 50818.98 606.95 92465.34 00:25:39.973 00:25:39.973 real 0m6.172s 00:25:39.973 user 0m11.494s 00:25:39.973 sys 0m0.198s 00:25:39.973 14:27:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:39.973 14:27:31 -- common/autotest_common.sh@10 -- # set +x 00:25:39.973 ************************************ 00:25:39.973 END TEST bdev_verify_big_io 00:25:39.973 ************************************ 00:25:39.973 14:27:31 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:39.974 14:27:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:39.974 14:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:39.974 14:27:31 -- common/autotest_common.sh@10 -- # set +x 00:25:39.974 ************************************ 00:25:39.974 START TEST bdev_write_zeroes 00:25:39.974 ************************************ 00:25:39.974 14:27:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:39.974 [2024-11-18 14:27:31.936403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:39.974 [2024-11-18 14:27:31.936601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146017 ] 00:25:40.234 [2024-11-18 14:27:32.075687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.234 [2024-11-18 14:27:32.141073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.492 Running I/O for 1 seconds... 
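Every test block in this log is wrapped by the run_test helper from autotest_common.sh, which is what emits the START/END banners and the real/user/sys timing seen above. Roughly, as a paraphrased sketch rather than the exact source:

  run_test() {
      local test_name=$1; shift
      echo "************ START TEST $test_name ************"
      time "$@"    # the wrapped command, e.g. bdevperf or a shell function
      echo "************ END TEST $test_name ************"
  }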
00:25:41.425 00:25:41.425 Latency(us) 00:25:41.425 [2024-11-18T14:27:33.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.425 [2024-11-18T14:27:33.499Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:41.425 Nvme0n1 : 1.00 74469.36 290.90 0.00 0.00 1714.31 573.44 14477.50 00:25:41.425 [2024-11-18T14:27:33.499Z] =================================================================================================================== 00:25:41.425 [2024-11-18T14:27:33.499Z] Total : 74469.36 290.90 0.00 0.00 1714.31 573.44 14477.50 00:25:41.683 00:25:41.683 real 0m1.688s 00:25:41.683 user 0m1.395s 00:25:41.683 sys 0m0.193s 00:25:41.683 14:27:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:41.683 14:27:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.684 ************************************ 00:25:41.684 END TEST bdev_write_zeroes 00:25:41.684 ************************************ 00:25:41.684 14:27:33 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.684 14:27:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:41.684 14:27:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:41.684 14:27:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.684 ************************************ 00:25:41.684 START TEST bdev_json_nonenclosed 00:25:41.684 ************************************ 00:25:41.684 14:27:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.684 [2024-11-18 14:27:33.661478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:41.684 [2024-11-18 14:27:33.661794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146055 ] 00:25:41.942 [2024-11-18 14:27:33.799924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.942 [2024-11-18 14:27:33.860085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.942 [2024-11-18 14:27:33.860603] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:25:41.942 [2024-11-18 14:27:33.860799] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:41.942 00:25:41.942 real 0m0.327s 00:25:41.942 user 0m0.125s 00:25:41.942 sys 0m0.101s 00:25:41.942 14:27:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:41.942 14:27:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.942 ************************************ 00:25:41.942 END TEST bdev_json_nonenclosed 00:25:41.942 ************************************ 00:25:41.942 14:27:33 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:41.942 14:27:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:25:41.942 14:27:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:41.942 14:27:33 -- common/autotest_common.sh@10 -- # set +x 00:25:41.942 ************************************ 00:25:41.942 START TEST bdev_json_nonarray 00:25:41.942 ************************************ 00:25:41.942 14:27:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:42.201 [2024-11-18 14:27:34.033080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:42.201 [2024-11-18 14:27:34.033308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146086 ] 00:25:42.201 [2024-11-18 14:27:34.171450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.201 [2024-11-18 14:27:34.232428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.201 [2024-11-18 14:27:34.232895] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
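Both JSON negative tests above hand bdevperf a deliberately malformed config and expect the "Invalid JSON configuration" error followed by spdk_app_stop'd on non-zero. Plausible shapes for the two inputs, inferred from the error messages (the real files are test/bdev/nonenclosed.json and test/bdev/nonarray.json in the repo):

  # "not enclosed in {}": the top level is not a JSON object
  cat > /tmp/nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # "'subsystems' should be an array": an object where an array is required
  cat > /tmp/nonarray.json <<'EOF'
  { "subsystems": { "subsystem": "bdev", "config": [] } }
  EOF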
00:25:42.201 [2024-11-18 14:27:34.233052] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:42.459 00:25:42.459 real 0m0.331s 00:25:42.459 user 0m0.156s 00:25:42.459 sys 0m0.075s 00:25:42.459 14:27:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:42.459 14:27:34 -- common/autotest_common.sh@10 -- # set +x 00:25:42.459 ************************************ 00:25:42.459 END TEST bdev_json_nonarray 00:25:42.459 ************************************ 00:25:42.459 14:27:34 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:25:42.459 14:27:34 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:25:42.459 14:27:34 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:25:42.459 14:27:34 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:25:42.459 14:27:34 -- bdev/blockdev.sh@809 -- # cleanup 00:25:42.459 14:27:34 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:42.459 14:27:34 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:42.459 14:27:34 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:25:42.459 14:27:34 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:25:42.459 14:27:34 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:25:42.459 14:27:34 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:25:42.459 00:25:42.459 real 0m29.768s 00:25:42.459 user 0m43.225s 00:25:42.459 sys 0m9.919s 00:25:42.459 14:27:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:42.459 14:27:34 -- common/autotest_common.sh@10 -- # set +x 00:25:42.459 ************************************ 00:25:42.459 END TEST blockdev_nvme 00:25:42.459 ************************************ 00:25:42.459 14:27:34 -- spdk/autotest.sh@206 -- # uname -s 00:25:42.459 14:27:34 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:25:42.459 14:27:34 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:25:42.459 14:27:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:42.459 14:27:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:42.459 14:27:34 -- common/autotest_common.sh@10 -- # set +x 00:25:42.459 ************************************ 00:25:42.459 START TEST blockdev_nvme_gpt 00:25:42.459 ************************************ 00:25:42.459 14:27:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:25:42.459 * Looking for test storage... 
00:25:42.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:42.459 14:27:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:42.459 14:27:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:42.459 14:27:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:42.718 14:27:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:42.718 14:27:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:42.718 14:27:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:42.718 14:27:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:42.718 14:27:34 -- scripts/common.sh@335 -- # IFS=.-: 00:25:42.719 14:27:34 -- scripts/common.sh@335 -- # read -ra ver1 00:25:42.719 14:27:34 -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.719 14:27:34 -- scripts/common.sh@336 -- # read -ra ver2 00:25:42.719 14:27:34 -- scripts/common.sh@337 -- # local 'op=<' 00:25:42.719 14:27:34 -- scripts/common.sh@339 -- # ver1_l=2 00:25:42.719 14:27:34 -- scripts/common.sh@340 -- # ver2_l=1 00:25:42.719 14:27:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:42.719 14:27:34 -- scripts/common.sh@343 -- # case "$op" in 00:25:42.719 14:27:34 -- scripts/common.sh@344 -- # : 1 00:25:42.719 14:27:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:42.719 14:27:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.719 14:27:34 -- scripts/common.sh@364 -- # decimal 1 00:25:42.719 14:27:34 -- scripts/common.sh@352 -- # local d=1 00:25:42.719 14:27:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.719 14:27:34 -- scripts/common.sh@354 -- # echo 1 00:25:42.719 14:27:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:42.719 14:27:34 -- scripts/common.sh@365 -- # decimal 2 00:25:42.719 14:27:34 -- scripts/common.sh@352 -- # local d=2 00:25:42.719 14:27:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.719 14:27:34 -- scripts/common.sh@354 -- # echo 2 00:25:42.719 14:27:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:42.719 14:27:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:42.719 14:27:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:42.719 14:27:34 -- scripts/common.sh@367 -- # return 0 00:25:42.719 14:27:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.719 14:27:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:42.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.719 --rc genhtml_branch_coverage=1 00:25:42.719 --rc genhtml_function_coverage=1 00:25:42.719 --rc genhtml_legend=1 00:25:42.719 --rc geninfo_all_blocks=1 00:25:42.719 --rc geninfo_unexecuted_blocks=1 00:25:42.719 00:25:42.719 ' 00:25:42.719 14:27:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:42.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.719 --rc genhtml_branch_coverage=1 00:25:42.719 --rc genhtml_function_coverage=1 00:25:42.719 --rc genhtml_legend=1 00:25:42.719 --rc geninfo_all_blocks=1 00:25:42.719 --rc geninfo_unexecuted_blocks=1 00:25:42.719 00:25:42.719 ' 00:25:42.719 14:27:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:42.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.719 --rc genhtml_branch_coverage=1 00:25:42.719 --rc genhtml_function_coverage=1 00:25:42.719 --rc genhtml_legend=1 00:25:42.719 --rc geninfo_all_blocks=1 00:25:42.719 --rc geninfo_unexecuted_blocks=1 00:25:42.719 00:25:42.719 ' 00:25:42.719 14:27:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:42.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.719 --rc genhtml_branch_coverage=1 00:25:42.719 --rc genhtml_function_coverage=1 00:25:42.719 --rc genhtml_legend=1 00:25:42.719 --rc geninfo_all_blocks=1 00:25:42.719 --rc geninfo_unexecuted_blocks=1 00:25:42.719 00:25:42.719 ' 00:25:42.719 14:27:34 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:42.719 14:27:34 -- bdev/nbd_common.sh@6 -- # set -e 00:25:42.719 14:27:34 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:42.719 14:27:34 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:42.719 14:27:34 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:42.719 14:27:34 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:42.719 14:27:34 -- bdev/blockdev.sh@18 -- # : 00:25:42.719 14:27:34 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:25:42.719 14:27:34 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:25:42.719 14:27:34 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:25:42.719 14:27:34 -- bdev/blockdev.sh@672 -- # uname -s 00:25:42.719 14:27:34 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:25:42.719 14:27:34 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:25:42.719 14:27:34 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:25:42.719 14:27:34 -- bdev/blockdev.sh@681 -- # crypto_device= 00:25:42.719 14:27:34 -- bdev/blockdev.sh@682 -- # dek= 00:25:42.719 14:27:34 -- bdev/blockdev.sh@683 -- # env_ctx= 00:25:42.719 14:27:34 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:25:42.719 14:27:34 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:25:42.719 14:27:34 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:25:42.719 14:27:34 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:25:42.719 14:27:34 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:25:42.719 14:27:34 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146168 00:25:42.719 14:27:34 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:42.719 14:27:34 -- bdev/blockdev.sh@47 -- # waitforlisten 146168 00:25:42.719 14:27:34 -- common/autotest_common.sh@829 -- # '[' -z 146168 ']' 00:25:42.719 14:27:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.719 14:27:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.719 14:27:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.719 14:27:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.719 14:27:34 -- common/autotest_common.sh@10 -- # set +x 00:25:42.719 14:27:34 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:42.719 [2024-11-18 14:27:34.605093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:42.719 [2024-11-18 14:27:34.605547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146168 ] 00:25:42.719 [2024-11-18 14:27:34.744196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.978 [2024-11-18 14:27:34.799323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:42.978 [2024-11-18 14:27:34.799790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.545 14:27:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.545 14:27:35 -- common/autotest_common.sh@862 -- # return 0 00:25:43.545 14:27:35 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:25:43.545 14:27:35 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:25:43.545 14:27:35 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:43.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:43.804 Waiting for block devices as requested 00:25:43.804 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:25:43.804 14:27:35 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:25:43.804 14:27:35 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:25:43.804 14:27:35 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:25:43.804 14:27:35 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:25:43.804 14:27:35 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:25:43.804 14:27:35 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:25:43.804 14:27:35 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:25:43.804 14:27:35 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:43.804 14:27:35 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:25:43.804 14:27:35 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:25:43.804 14:27:35 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:25:43.804 14:27:35 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:25:43.804 14:27:35 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:25:43.804 14:27:35 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:25:43.804 14:27:35 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:25:43.804 14:27:35 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:25:43.804 14:27:35 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:25:43.804 BYT; 00:25:43.804 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:25:43.804 14:27:35 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:25:43.804 BYT; 00:25:43.804 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:25:43.804 14:27:35 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:25:43.804 14:27:35 -- bdev/blockdev.sh@114 -- # break 00:25:43.804 14:27:35 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:25:43.804 14:27:35 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:25:43.804 14:27:35 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:25:43.804 14:27:35 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:25:44.372 14:27:36 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:25:44.372 14:27:36 -- scripts/common.sh@410 -- # local spdk_guid 00:25:44.372 14:27:36 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:25:44.372 14:27:36 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:44.372 14:27:36 -- scripts/common.sh@415 -- # IFS='()' 00:25:44.372 14:27:36 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:25:44.372 14:27:36 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:44.372 14:27:36 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:25:44.372 14:27:36 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:44.372 14:27:36 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:44.372 14:27:36 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:25:44.372 14:27:36 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:25:44.372 14:27:36 -- scripts/common.sh@422 -- # local spdk_guid 00:25:44.372 14:27:36 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:25:44.372 14:27:36 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:44.372 14:27:36 -- scripts/common.sh@427 -- # IFS='()' 00:25:44.372 14:27:36 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:25:44.372 14:27:36 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:25:44.372 14:27:36 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:25:44.372 14:27:36 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:44.372 14:27:36 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:44.372 14:27:36 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:25:44.372 14:27:36 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:25:45.309 The operation has completed successfully. 00:25:45.309 14:27:37 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:25:46.246 The operation has completed successfully. 
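The get_spdk_gpt helper traced above scrapes the SPDK partition-type GUID out of module/bdev/gpt/gpt.h, and sgdisk then stamps it, plus a fixed unique GUID, onto each partition. The same steps in isolation, mirroring the trace (paths and GUIDs as used by this run):

  GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
  # grab the argument list of the SPDK_GPT_GUID(...) macro invocation
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
  spdk_guid=${spdk_guid//, /-}   # '0x6527994e, 0x2c5a, ...' -> '0x6527994e-0x2c5a-...'
  spdk_guid=${spdk_guid//0x/}    # -> '6527994e-2c5a-4eec-9613-8f5944074e8b'
  # tag partition 1 with the type GUID and the test's fixed unique GUID
  sgdisk -t "1:$spdk_guid" -u "1:6f89f330-603b-4116-ac73-2ca8eae53030" /dev/nvme0n1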
00:25:46.246 14:27:38 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:46.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:25:46.814 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:25:47.752 14:27:39 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 [] 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:25:47.752 14:27:39 -- bdev/blockdev.sh@79 -- # local json 00:25:47.752 14:27:39 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:25:47.752 14:27:39 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:47.752 14:27:39 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@738 -- # cat 00:25:47.752 14:27:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:25:47.752 14:27:39 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:25:47.752 14:27:39 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:25:47.752 14:27:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.752 14:27:39 -- common/autotest_common.sh@10 -- # set +x 00:25:47.752 14:27:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.752 14:27:39 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:25:47.752 14:27:39 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:25:47.752 14:27:39 -- bdev/blockdev.sh@747 -- # jq -r .name 00:25:47.752 14:27:39 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:25:47.752 14:27:39 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:25:47.752 14:27:39 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:25:47.752 14:27:39 -- bdev/blockdev.sh@752 -- # killprocess 146168 00:25:47.752 14:27:39 -- common/autotest_common.sh@936 -- # '[' -z 146168 ']' 00:25:47.752 14:27:39 -- common/autotest_common.sh@940 -- # kill -0 146168 00:25:47.752 14:27:39 -- common/autotest_common.sh@941 -- # uname 00:25:47.752 14:27:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:47.752 14:27:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146168 00:25:48.011 14:27:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:48.011 killing process with pid 146168 00:25:48.011 14:27:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.011 14:27:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146168' 00:25:48.011 14:27:39 -- common/autotest_common.sh@955 -- # kill 146168 00:25:48.011 14:27:39 -- common/autotest_common.sh@960 -- # wait 146168 00:25:48.579 14:27:40 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:48.579 14:27:40 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:25:48.579 14:27:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:25:48.579 14:27:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.579 14:27:40 -- common/autotest_common.sh@10 -- # set +x 00:25:48.579 ************************************ 00:25:48.579 START TEST bdev_hello_world 00:25:48.579 ************************************ 00:25:48.579 14:27:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:25:48.579 [2024-11-18 14:27:40.463848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:48.579 [2024-11-18 14:27:40.464080] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146586 ] 00:25:48.579 [2024-11-18 14:27:40.609681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.837 [2024-11-18 14:27:40.692934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.096 [2024-11-18 14:27:40.930304] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:49.096 [2024-11-18 14:27:40.930985] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:25:49.096 [2024-11-18 14:27:40.931339] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:49.096 [2024-11-18 14:27:40.933889] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:49.096 [2024-11-18 14:27:40.934658] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:49.096 [2024-11-18 14:27:40.934983] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:49.096 [2024-11-18 14:27:40.935485] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:25:49.096 00:25:49.096 [2024-11-18 14:27:40.935779] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:49.355 ************************************ 00:25:49.355 END TEST bdev_hello_world 00:25:49.355 ************************************ 00:25:49.355 00:25:49.355 real 0m0.845s 00:25:49.355 user 0m0.508s 00:25:49.355 sys 0m0.235s 00:25:49.355 14:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:49.355 14:27:41 -- common/autotest_common.sh@10 -- # set +x 00:25:49.355 14:27:41 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:25:49.355 14:27:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:49.355 14:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.355 14:27:41 -- common/autotest_common.sh@10 -- # set +x 00:25:49.355 ************************************ 00:25:49.355 START TEST bdev_bounds 00:25:49.355 ************************************ 00:25:49.355 14:27:41 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:25:49.355 14:27:41 -- bdev/blockdev.sh@288 -- # bdevio_pid=146617 00:25:49.355 14:27:41 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:49.355 14:27:41 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:49.355 14:27:41 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 146617' 00:25:49.355 Process bdevio pid: 146617 00:25:49.355 14:27:41 -- bdev/blockdev.sh@291 -- # waitforlisten 146617 00:25:49.355 14:27:41 -- common/autotest_common.sh@829 -- # '[' -z 146617 ']' 00:25:49.355 14:27:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.355 14:27:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.355 14:27:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.355 14:27:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.355 14:27:41 -- common/autotest_common.sh@10 -- # set +x 00:25:49.355 [2024-11-18 14:27:41.375740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:49.355 [2024-11-18 14:27:41.375996] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146617 ] 00:25:49.614 [2024-11-18 14:27:41.532300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.614 [2024-11-18 14:27:41.605738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.614 [2024-11-18 14:27:41.605868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.614 [2024-11-18 14:27:41.606684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.551 14:27:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.551 14:27:42 -- common/autotest_common.sh@862 -- # return 0 00:25:50.551 14:27:42 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:50.551 I/O targets: 00:25:50.551 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:25:50.551 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:25:50.551 00:25:50.551 00:25:50.551 CUnit - A unit testing framework for C - Version 2.1-3 00:25:50.551 http://cunit.sourceforge.net/ 00:25:50.551 00:25:50.551 00:25:50.551 Suite: bdevio tests on: Nvme0n1p2 00:25:50.551 Test: blockdev write read block ...passed 00:25:50.551 Test: blockdev write zeroes read block ...passed 00:25:50.551 Test: blockdev write zeroes read no split ...passed 00:25:50.551 Test: blockdev write zeroes read split ...passed 00:25:50.551 Test: blockdev write zeroes read split partial ...passed 00:25:50.551 Test: blockdev reset ...[2024-11-18 14:27:42.400386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:50.551 [2024-11-18 14:27:42.403469] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:50.551 passed 00:25:50.551 Test: blockdev write read 8 blocks ...passed 00:25:50.551 Test: blockdev write read size > 128k ...passed 00:25:50.551 Test: blockdev write read invalid size ...passed 00:25:50.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:50.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:50.551 Test: blockdev write read max offset ...passed 00:25:50.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:50.551 Test: blockdev writev readv 8 blocks ...passed 00:25:50.551 Test: blockdev writev readv 30 x 1block ...passed 00:25:50.551 Test: blockdev writev readv block ...passed 00:25:50.552 Test: blockdev writev readv size > 128k ...passed 00:25:50.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:50.552 Test: blockdev comparev and writev ...[2024-11-18 14:27:42.411346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x38e0b000 len:0x1000 00:25:50.552 [2024-11-18 14:27:42.411421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:50.552 passed 00:25:50.552 Test: blockdev nvme passthru rw ...passed 00:25:50.552 Test: blockdev nvme passthru vendor specific ...passed 00:25:50.552 Test: blockdev nvme admin passthru ...passed 00:25:50.552 Test: blockdev copy ...passed 00:25:50.552 Suite: bdevio tests on: Nvme0n1p1 00:25:50.552 Test: blockdev write read block ...passed 00:25:50.552 Test: blockdev write zeroes read block ...passed 00:25:50.552 Test: blockdev write zeroes read no split ...passed 00:25:50.552 Test: blockdev write zeroes read split ...passed 00:25:50.552 Test: blockdev write zeroes read split partial ...passed 00:25:50.552 Test: blockdev reset ...[2024-11-18 14:27:42.425591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:25:50.552 [2024-11-18 14:27:42.427783] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:50.552 passed 00:25:50.552 Test: blockdev write read 8 blocks ...passed 00:25:50.552 Test: blockdev write read size > 128k ...passed 00:25:50.552 Test: blockdev write read invalid size ...passed 00:25:50.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:50.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:50.552 Test: blockdev write read max offset ...passed 00:25:50.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:50.552 Test: blockdev writev readv 8 blocks ...passed 00:25:50.552 Test: blockdev writev readv 30 x 1block ...passed 00:25:50.552 Test: blockdev writev readv block ...passed 00:25:50.552 Test: blockdev writev readv size > 128k ...passed 00:25:50.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:50.552 Test: blockdev comparev and writev ...[2024-11-18 14:27:42.435437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x38e0d000 len:0x1000 00:25:50.552 [2024-11-18 14:27:42.435494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:25:50.552 passed 00:25:50.552 Test: blockdev nvme passthru rw ...passed 00:25:50.552 Test: blockdev nvme passthru vendor specific ...passed 00:25:50.552 Test: blockdev nvme admin passthru ...passed 00:25:50.552 Test: blockdev copy ...passed 00:25:50.552 00:25:50.552 Run Summary: Type Total Ran Passed Failed Inactive 00:25:50.552 suites 2 2 n/a 0 0 00:25:50.552 tests 46 46 46 0 0 00:25:50.552 asserts 284 284 284 0 n/a 00:25:50.552 00:25:50.552 Elapsed time = 0.114 seconds 00:25:50.552 0 00:25:50.552 14:27:42 -- bdev/blockdev.sh@293 -- # killprocess 146617 00:25:50.552 14:27:42 -- common/autotest_common.sh@936 -- # '[' -z 146617 ']' 00:25:50.552 14:27:42 -- common/autotest_common.sh@940 -- # kill -0 146617 00:25:50.552 14:27:42 -- common/autotest_common.sh@941 -- # uname 00:25:50.552 14:27:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:50.552 14:27:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146617 00:25:50.552 14:27:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:50.552 14:27:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:50.552 killing process with pid 146617 00:25:50.552 14:27:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146617' 00:25:50.552 14:27:42 -- common/autotest_common.sh@955 -- # kill 146617 00:25:50.552 14:27:42 -- common/autotest_common.sh@960 -- # wait 146617 00:25:50.811 14:27:42 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:25:50.811 00:25:50.811 real 0m1.415s 00:25:50.811 user 0m3.543s 00:25:50.811 sys 0m0.312s 00:25:50.811 14:27:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:50.811 14:27:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.811 ************************************ 00:25:50.811 END TEST bdev_bounds 00:25:50.811 ************************************ 00:25:50.811 14:27:42 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:25:50.811 14:27:42 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:50.811 14:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.811 14:27:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.811 ************************************ 00:25:50.811 START TEST bdev_nbd 
00:25:50.811 ************************************ 00:25:50.811 14:27:42 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:25:50.811 14:27:42 -- bdev/blockdev.sh@298 -- # uname -s 00:25:50.811 14:27:42 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:25:50.811 14:27:42 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:50.811 14:27:42 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:50.811 14:27:42 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:25:50.811 14:27:42 -- bdev/blockdev.sh@302 -- # local bdev_all 00:25:50.811 14:27:42 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:25:50.811 14:27:42 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:25:50.811 14:27:42 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:50.811 14:27:42 -- bdev/blockdev.sh@309 -- # local nbd_all 00:25:50.811 14:27:42 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:25:50.811 14:27:42 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:50.811 14:27:42 -- bdev/blockdev.sh@312 -- # local nbd_list 00:25:50.811 14:27:42 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:50.811 14:27:42 -- bdev/blockdev.sh@313 -- # local bdev_list 00:25:50.811 14:27:42 -- bdev/blockdev.sh@316 -- # nbd_pid=146673 00:25:50.811 14:27:42 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:50.811 14:27:42 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:50.812 14:27:42 -- bdev/blockdev.sh@318 -- # waitforlisten 146673 /var/tmp/spdk-nbd.sock 00:25:50.812 14:27:42 -- common/autotest_common.sh@829 -- # '[' -z 146673 ']' 00:25:50.812 14:27:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:25:50.812 14:27:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.812 14:27:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:50.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:50.812 14:27:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.812 14:27:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.812 [2024-11-18 14:27:42.840651] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:25:50.812 [2024-11-18 14:27:42.840808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.070 [2024-11-18 14:27:42.980619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.070 [2024-11-18 14:27:43.050763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.034 14:27:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.034 14:27:43 -- common/autotest_common.sh@862 -- # return 0 00:25:52.034 14:27:43 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@24 -- # local i 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:52.034 14:27:43 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:52.034 14:27:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:52.034 14:27:44 -- common/autotest_common.sh@867 -- # local i 00:25:52.034 14:27:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:52.034 14:27:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:52.034 14:27:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:52.034 14:27:44 -- common/autotest_common.sh@871 -- # break 00:25:52.034 14:27:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:52.034 14:27:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:52.034 14:27:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.034 1+0 records in 00:25:52.034 1+0 records out 00:25:52.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562485 s, 7.3 MB/s 00:25:52.034 14:27:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.034 14:27:44 -- common/autotest_common.sh@884 -- # size=4096 00:25:52.034 14:27:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.034 14:27:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:52.034 14:27:44 -- common/autotest_common.sh@887 -- # return 0 00:25:52.034 14:27:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:52.034 14:27:44 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:25:52.034 14:27:44 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:25:52.293 14:27:44 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:25:52.293 14:27:44 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:25:52.293 14:27:44 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:25:52.293 14:27:44 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:52.293 14:27:44 -- common/autotest_common.sh@867 -- # local i 00:25:52.293 14:27:44 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:52.293 14:27:44 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:52.293 14:27:44 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:52.293 14:27:44 -- common/autotest_common.sh@871 -- # break 00:25:52.293 14:27:44 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:52.293 14:27:44 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:52.293 14:27:44 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:52.293 1+0 records in 00:25:52.293 1+0 records out 00:25:52.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528877 s, 7.7 MB/s 00:25:52.293 14:27:44 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.293 14:27:44 -- common/autotest_common.sh@884 -- # size=4096 00:25:52.293 14:27:44 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:52.293 14:27:44 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:52.293 14:27:44 -- common/autotest_common.sh@887 -- # return 0 00:25:52.293 14:27:44 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:52.293 14:27:44 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:25:52.293 14:27:44 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:52.552 { 00:25:52.552 "nbd_device": "/dev/nbd0", 00:25:52.552 "bdev_name": "Nvme0n1p1" 00:25:52.552 }, 00:25:52.552 { 00:25:52.552 "nbd_device": "/dev/nbd1", 00:25:52.552 "bdev_name": "Nvme0n1p2" 00:25:52.552 } 00:25:52.552 ]' 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:52.552 { 00:25:52.552 "nbd_device": "/dev/nbd0", 00:25:52.552 "bdev_name": "Nvme0n1p1" 00:25:52.552 }, 00:25:52.552 { 00:25:52.552 "nbd_device": "/dev/nbd1", 00:25:52.552 "bdev_name": "Nvme0n1p2" 00:25:52.552 } 00:25:52.552 ]' 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@51 -- # local i 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.552 14:27:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:52.811 14:27:44 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@41 -- # break 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@45 -- # return 0 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:52.811 14:27:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@41 -- # break 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@45 -- # return 0 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.070 14:27:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@65 -- # true 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@65 -- # count=0 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@122 -- # count=0 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@127 -- # return 0 00:25:53.329 14:27:45 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:53.329 14:27:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:53.330 14:27:45 -- bdev/nbd_common.sh@12 -- # local i 00:25:53.330 14:27:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:53.330 14:27:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:53.330 14:27:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:25:53.589 /dev/nbd0 00:25:53.589 14:27:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:53.589 14:27:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:53.589 14:27:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:25:53.589 14:27:45 -- common/autotest_common.sh@867 -- # local i 00:25:53.589 14:27:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:53.589 14:27:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:53.589 14:27:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:25:53.589 14:27:45 -- common/autotest_common.sh@871 -- # break 00:25:53.589 14:27:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:53.589 14:27:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:53.589 14:27:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:53.589 1+0 records in 00:25:53.589 1+0 records out 00:25:53.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462298 s, 8.9 MB/s 00:25:53.589 14:27:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:53.589 14:27:45 -- common/autotest_common.sh@884 -- # size=4096 00:25:53.589 14:27:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:53.589 14:27:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:53.589 14:27:45 -- common/autotest_common.sh@887 -- # return 0 00:25:53.589 14:27:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:53.589 14:27:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:53.589 14:27:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:25:53.849 /dev/nbd1 00:25:53.849 14:27:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:53.849 14:27:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:53.849 14:27:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:25:53.849 14:27:45 -- common/autotest_common.sh@867 -- # local i 00:25:53.849 14:27:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:25:53.849 14:27:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:25:53.849 14:27:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:25:53.849 14:27:45 -- common/autotest_common.sh@871 -- # break 00:25:53.849 14:27:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:25:53.849 14:27:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:25:53.849 14:27:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:53.849 1+0 records in 00:25:53.849 1+0 records out 00:25:53.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675232 s, 6.1 MB/s 00:25:53.849 14:27:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:53.849 14:27:45 -- common/autotest_common.sh@884 -- # size=4096 00:25:53.849 14:27:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:53.849 14:27:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:25:53.849 14:27:45 -- common/autotest_common.sh@887 -- # return 0 00:25:53.849 14:27:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:53.849 14:27:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:53.849 14:27:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
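The trace around this point is dense, so here is the round trip it performs for each partition, reduced to the underlying commands (a sketch with paths shortened; the flags and sizes are the ones visible in this log): export the bdev as an nbd device over the RPC socket, wait for the kernel to register it, write a random pattern through it with O_DIRECT, read it back, compare, then tear the device down.

    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
    grep -q -w nbd0 /proc/partitions                     # polled until the device appears
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of reference data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                   # verify what the bdev stored
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

The same sequence runs for Nvme0n1p2 on /dev/nbd1, which is why every step appears twice in the surrounding trace.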
00:25:53.849 14:27:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:53.849 14:27:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:54.107 14:27:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:54.107 { 00:25:54.107 "nbd_device": "/dev/nbd0", 00:25:54.107 "bdev_name": "Nvme0n1p1" 00:25:54.107 }, 00:25:54.107 { 00:25:54.107 "nbd_device": "/dev/nbd1", 00:25:54.107 "bdev_name": "Nvme0n1p2" 00:25:54.107 } 00:25:54.107 ]' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:54.108 { 00:25:54.108 "nbd_device": "/dev/nbd0", 00:25:54.108 "bdev_name": "Nvme0n1p1" 00:25:54.108 }, 00:25:54.108 { 00:25:54.108 "nbd_device": "/dev/nbd1", 00:25:54.108 "bdev_name": "Nvme0n1p2" 00:25:54.108 } 00:25:54.108 ]' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:25:54.108 /dev/nbd1' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:25:54.108 /dev/nbd1' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@65 -- # count=2 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@95 -- # count=2 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:54.108 256+0 records in 00:25:54.108 256+0 records out 00:25:54.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00954624 s, 110 MB/s 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:54.108 14:27:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:54.367 256+0 records in 00:25:54.367 256+0 records out 00:25:54.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0904785 s, 11.6 MB/s 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:25:54.367 256+0 records in 00:25:54.367 256+0 records out 00:25:54.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0890831 s, 11.8 MB/s 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:25:54.367 14:27:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@51 -- # local i 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:54.367 14:27:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@41 -- # break 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:54.626 14:27:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@41 -- # break 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@45 -- # return 0 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:54.885 14:27:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@65 -- # true 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@65 -- # count=0 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@104 -- # count=0 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:55.144 14:27:46 -- 
bdev/nbd_common.sh@109 -- # return 0 00:25:55.144 14:27:46 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:25:55.144 14:27:46 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:55.404 malloc_lvol_verify 00:25:55.404 14:27:47 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:55.404 2c760483-c6eb-4330-9609-c930ebc0a2c2 00:25:55.404 14:27:47 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:55.663 e14dd9fc-ae68-43ce-9eca-c2adb1343ac3 00:25:55.663 14:27:47 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:55.923 /dev/nbd0 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:25:55.923 mke2fs 1.46.5 (30-Dec-2021) 00:25:55.923 00:25:55.923 Filesystem too small for a journal 00:25:55.923 Discarding device blocks: 0/1024 done 00:25:55.923 Creating filesystem with 1024 4k blocks and 1024 inodes 00:25:55.923 00:25:55.923 Allocating group tables: 0/1 done 00:25:55.923 Writing inode tables: 0/1 done 00:25:55.923 Writing superblocks and filesystem accounting information: 0/1 done 00:25:55.923 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@51 -- # local i 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.923 14:27:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@41 -- # break 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@45 -- # return 0 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:25:56.183 14:27:48 -- bdev/nbd_common.sh@147 -- # return 0 00:25:56.183 14:27:48 -- bdev/blockdev.sh@324 -- # killprocess 146673 00:25:56.183 14:27:48 -- common/autotest_common.sh@936 -- # '[' -z 146673 ']' 00:25:56.183 14:27:48 -- common/autotest_common.sh@940 -- # kill -0 146673 00:25:56.183 14:27:48 -- common/autotest_common.sh@941 -- # uname 00:25:56.183 14:27:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:56.183 14:27:48 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146673 00:25:56.183 14:27:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:56.183 14:27:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:56.183 killing process with pid 146673 00:25:56.183 14:27:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146673' 00:25:56.183 14:27:48 -- common/autotest_common.sh@955 -- # kill 146673 00:25:56.183 14:27:48 -- common/autotest_common.sh@960 -- # wait 146673 00:25:56.443 14:27:48 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:25:56.443 00:25:56.443 real 0m5.560s 00:25:56.443 user 0m8.332s 00:25:56.443 sys 0m1.518s 00:25:56.443 14:27:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:56.443 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:25:56.443 ************************************ 00:25:56.443 END TEST bdev_nbd 00:25:56.443 ************************************ 00:25:56.443 14:27:48 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:25:56.443 14:27:48 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:25:56.443 14:27:48 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:25:56.443 skipping fio tests on NVMe due to multi-ns failures. 00:25:56.443 14:27:48 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:25:56.443 14:27:48 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:56.443 14:27:48 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:56.443 14:27:48 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:25:56.443 14:27:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:56.443 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:25:56.443 ************************************ 00:25:56.443 START TEST bdev_verify 00:25:56.443 ************************************ 00:25:56.443 14:27:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:56.443 [2024-11-18 14:27:48.457057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:56.443 [2024-11-18 14:27:48.457248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146911 ] 00:25:56.702 [2024-11-18 14:27:48.605214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:56.702 [2024-11-18 14:27:48.695126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.702 [2024-11-18 14:27:48.695143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.960 Running I/O for 5 seconds... 
00:26:02.233 00:26:02.233 Latency(us) 00:26:02.233 [2024-11-18T14:27:54.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.233 [2024-11-18T14:27:54.307Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:02.233 Verification LBA range: start 0x0 length 0x4ff80 00:26:02.233 Nvme0n1p1 : 5.02 5500.56 21.49 0.00 0.00 23218.22 1482.01 27048.49 00:26:02.233 [2024-11-18T14:27:54.307Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:02.233 Verification LBA range: start 0x4ff80 length 0x4ff80 00:26:02.233 Nvme0n1p1 : 5.02 5511.26 21.53 0.00 0.00 23141.17 3157.64 21328.99 00:26:02.233 [2024-11-18T14:27:54.307Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:02.233 Verification LBA range: start 0x0 length 0x4ff7f 00:26:02.233 Nvme0n1p2 : 5.02 5499.05 21.48 0.00 0.00 23198.86 1742.66 25380.31 00:26:02.233 [2024-11-18T14:27:54.307Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:02.233 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:26:02.233 Nvme0n1p2 : 5.02 5513.57 21.54 0.00 0.00 23157.38 2978.91 23473.80 00:26:02.233 [2024-11-18T14:27:54.307Z] =================================================================================================================== 00:26:02.233 [2024-11-18T14:27:54.307Z] Total : 22024.45 86.03 0.00 0.00 23178.88 1482.01 27048.49 00:26:04.768 00:26:04.768 real 0m8.125s 00:26:04.768 user 0m15.008s 00:26:04.768 sys 0m0.278s 00:26:04.768 14:27:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:04.768 ************************************ 00:26:04.768 END TEST bdev_verify 00:26:04.768 ************************************ 00:26:04.768 14:27:56 -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 14:27:56 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:04.768 14:27:56 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:26:04.768 14:27:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:04.768 14:27:56 -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 ************************************ 00:26:04.768 START TEST bdev_verify_big_io 00:26:04.768 ************************************ 00:26:04.768 14:27:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:26:04.768 [2024-11-18 14:27:56.643014] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:04.768 [2024-11-18 14:27:56.643266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147031 ] 00:26:04.768 [2024-11-18 14:27:56.792747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:05.027 [2024-11-18 14:27:56.866387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.027 [2024-11-18 14:27:56.866433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.027 Running I/O for 5 seconds... 
00:26:10.298 00:26:10.298 Latency(us) 00:26:10.298 [2024-11-18T14:28:02.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.298 [2024-11-18T14:28:02.372Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:10.298 Verification LBA range: start 0x0 length 0x4ff8 00:26:10.298 Nvme0n1p1 : 5.08 1142.00 71.38 0.00 0.00 110895.64 2263.97 168725.41 00:26:10.298 [2024-11-18T14:28:02.372Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:10.298 Verification LBA range: start 0x4ff8 length 0x4ff8 00:26:10.298 Nvme0n1p1 : 5.11 1220.93 76.31 0.00 0.00 103939.88 2546.97 153473.40 00:26:10.298 [2024-11-18T14:28:02.372Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:26:10.298 Verification LBA range: start 0x0 length 0x4ff7 00:26:10.299 Nvme0n1p2 : 5.09 1148.70 71.79 0.00 0.00 109148.56 588.33 121539.49 00:26:10.299 [2024-11-18T14:28:02.373Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:26:10.299 Verification LBA range: start 0x4ff7 length 0x4ff7 00:26:10.299 Nvme0n1p2 : 5.11 1220.32 76.27 0.00 0.00 102986.36 3440.64 137268.13 00:26:10.299 [2024-11-18T14:28:02.373Z] =================================================================================================================== 00:26:10.299 [2024-11-18T14:28:02.373Z] Total : 4731.96 295.75 0.00 0.00 106630.02 588.33 168725.41 00:26:10.558 00:26:10.558 real 0m5.943s 00:26:10.558 user 0m11.133s 00:26:10.558 sys 0m0.241s 00:26:10.558 14:28:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:10.558 ************************************ 00:26:10.558 END TEST bdev_verify_big_io 00:26:10.558 ************************************ 00:26:10.558 14:28:02 -- common/autotest_common.sh@10 -- # set +x 00:26:10.558 14:28:02 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:10.558 14:28:02 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:26:10.558 14:28:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:10.558 14:28:02 -- common/autotest_common.sh@10 -- # set +x 00:26:10.558 ************************************ 00:26:10.558 START TEST bdev_write_zeroes 00:26:10.558 ************************************ 00:26:10.558 14:28:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:10.817 [2024-11-18 14:28:02.634184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:10.817 [2024-11-18 14:28:02.634473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147128 ] 00:26:10.817 [2024-11-18 14:28:02.779342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.817 [2024-11-18 14:28:02.839283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.075 Running I/O for 1 seconds... 
00:26:12.008 00:26:12.008 Latency(us) 00:26:12.008 [2024-11-18T14:28:04.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.008 [2024-11-18T14:28:04.082Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:12.008 Nvme0n1p1 : 1.00 27727.65 108.31 0.00 0.00 4606.74 2338.44 14954.12 00:26:12.008 [2024-11-18T14:28:04.082Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:26:12.008 Nvme0n1p2 : 1.01 27748.55 108.39 0.00 0.00 4596.45 2204.39 11021.96 00:26:12.008 [2024-11-18T14:28:04.082Z] =================================================================================================================== 00:26:12.008 [2024-11-18T14:28:04.082Z] Total : 55476.20 216.70 0.00 0.00 4601.59 2204.39 14954.12 00:26:12.267 00:26:12.267 real 0m1.713s 00:26:12.267 user 0m1.417s 00:26:12.267 sys 0m0.196s 00:26:12.267 ************************************ 00:26:12.267 END TEST bdev_write_zeroes 00:26:12.267 ************************************ 00:26:12.267 14:28:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:12.267 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:26:12.267 14:28:04 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:12.267 14:28:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:26:12.267 14:28:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.267 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:26:12.526 ************************************ 00:26:12.526 START TEST bdev_json_nonenclosed 00:26:12.526 ************************************ 00:26:12.526 14:28:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:12.526 [2024-11-18 14:28:04.388979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:12.526 [2024-11-18 14:28:04.389197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147172 ] 00:26:12.526 [2024-11-18 14:28:04.535949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.526 [2024-11-18 14:28:04.597821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.526 [2024-11-18 14:28:04.598416] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:26:12.526 [2024-11-18 14:28:04.598705] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:12.784 00:26:12.784 real 0m0.365s 00:26:12.784 user 0m0.140s 00:26:12.784 sys 0m0.124s 00:26:12.784 ************************************ 00:26:12.784 END TEST bdev_json_nonenclosed 00:26:12.784 ************************************ 00:26:12.784 14:28:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:12.784 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:26:12.784 14:28:04 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:12.784 14:28:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:26:12.784 14:28:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.784 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:26:12.784 ************************************ 00:26:12.784 START TEST bdev_json_nonarray 00:26:12.784 ************************************ 00:26:12.784 14:28:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:26:12.784 [2024-11-18 14:28:04.795374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:12.784 [2024-11-18 14:28:04.795571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147194 ] 00:26:13.043 [2024-11-18 14:28:04.933051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.043 [2024-11-18 14:28:05.014671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.043 [2024-11-18 14:28:05.015224] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
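The two JSON negative tests here share one shape: hand bdevperf a config that fails a specific validation in spdk_subsystem_init_from_json_config and require the app to exit non-zero. The fixture contents are not shown in this log, so the bodies below are plausible reconstructions, each chosen to trip exactly the error printed in the trace:

    printf '[]\n' > nonenclosed.json               # valid JSON, but the top level is not an object
    printf '{"subsystems": {}}\n' > nonarray.json  # enclosed in {}, but "subsystems" is not an array
    for cfg in nonenclosed.json nonarray.json; do
        ./build/examples/bdevperf --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1 '' && exit 1
    done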
00:26:13.043 [2024-11-18 14:28:05.015419] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:13.302 00:26:13.302 real 0m0.369s 00:26:13.302 user 0m0.189s 00:26:13.302 sys 0m0.080s 00:26:13.302 14:28:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:13.302 14:28:05 -- common/autotest_common.sh@10 -- # set +x 00:26:13.302 ************************************ 00:26:13.302 END TEST bdev_json_nonarray 00:26:13.302 ************************************ 00:26:13.302 14:28:05 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:26:13.302 14:28:05 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:26:13.302 14:28:05 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:26:13.302 14:28:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:13.303 14:28:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.303 14:28:05 -- common/autotest_common.sh@10 -- # set +x 00:26:13.303 ************************************ 00:26:13.303 START TEST bdev_gpt_uuid 00:26:13.303 ************************************ 00:26:13.303 14:28:05 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:26:13.303 14:28:05 -- bdev/blockdev.sh@612 -- # local bdev 00:26:13.303 14:28:05 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:26:13.303 14:28:05 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147223 00:26:13.303 14:28:05 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:13.303 14:28:05 -- bdev/blockdev.sh@47 -- # waitforlisten 147223 00:26:13.303 14:28:05 -- common/autotest_common.sh@829 -- # '[' -z 147223 ']' 00:26:13.303 14:28:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.303 14:28:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.303 14:28:05 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:13.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.303 14:28:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.303 14:28:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.303 14:28:05 -- common/autotest_common.sh@10 -- # set +x 00:26:13.303 [2024-11-18 14:28:05.254969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:13.303 [2024-11-18 14:28:05.255501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147223 ] 00:26:13.562 [2024-11-18 14:28:05.416469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.562 [2024-11-18 14:28:05.471341] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:13.562 [2024-11-18 14:28:05.471857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.499 14:28:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.499 14:28:06 -- common/autotest_common.sh@862 -- # return 0 00:26:14.499 14:28:06 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:14.499 14:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.499 14:28:06 -- common/autotest_common.sh@10 -- # set +x 00:26:14.499 Some configs were skipped because the RPC state that can call them passed over. 
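The UUID checks that follow reduce to one pattern: look the partition bdev up by its GPT unique partition GUID and confirm the same GUID comes back both as the bdev alias and in the GPT driver metadata. Inlined (the GUID is the fixed SPDK test value printed below):

    u=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$u")
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$u" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$u" ]]

The second half of the test repeats this for Nvme0n1p2 with its own GUID, abf1734f-66e5-4c0f-aa29-4021d4d307df.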
00:26:14.499 14:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.499 14:28:06 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:26:14.499 14:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.499 14:28:06 -- common/autotest_common.sh@10 -- # set +x 00:26:14.499 14:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.499 14:28:06 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:26:14.499 14:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.499 14:28:06 -- common/autotest_common.sh@10 -- # set +x 00:26:14.499 14:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.499 14:28:06 -- bdev/blockdev.sh@619 -- # bdev='[ 00:26:14.499 { 00:26:14.499 "name": "Nvme0n1p1", 00:26:14.499 "aliases": [ 00:26:14.499 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:26:14.499 ], 00:26:14.499 "product_name": "GPT Disk", 00:26:14.499 "block_size": 4096, 00:26:14.499 "num_blocks": 655104, 00:26:14.499 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:26:14.499 "assigned_rate_limits": { 00:26:14.500 "rw_ios_per_sec": 0, 00:26:14.500 "rw_mbytes_per_sec": 0, 00:26:14.500 "r_mbytes_per_sec": 0, 00:26:14.500 "w_mbytes_per_sec": 0 00:26:14.500 }, 00:26:14.500 "claimed": false, 00:26:14.500 "zoned": false, 00:26:14.500 "supported_io_types": { 00:26:14.500 "read": true, 00:26:14.500 "write": true, 00:26:14.500 "unmap": true, 00:26:14.500 "write_zeroes": true, 00:26:14.500 "flush": true, 00:26:14.500 "reset": true, 00:26:14.500 "compare": true, 00:26:14.500 "compare_and_write": false, 00:26:14.500 "abort": true, 00:26:14.500 "nvme_admin": false, 00:26:14.500 "nvme_io": false 00:26:14.500 }, 00:26:14.500 "driver_specific": { 00:26:14.500 "gpt": { 00:26:14.500 "base_bdev": "Nvme0n1", 00:26:14.500 "offset_blocks": 256, 00:26:14.500 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:26:14.500 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:26:14.500 "partition_name": "SPDK_TEST_first" 00:26:14.500 } 00:26:14.500 } 00:26:14.500 } 00:26:14.500 ]' 00:26:14.500 14:28:06 -- bdev/blockdev.sh@620 -- # jq -r length 00:26:14.500 14:28:06 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:26:14.500 14:28:06 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:26:14.500 14:28:06 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:26:14.500 14:28:06 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:26:14.500 14:28:06 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:26:14.500 14:28:06 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:26:14.500 14:28:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.500 14:28:06 -- common/autotest_common.sh@10 -- # set +x 00:26:14.500 14:28:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.500 14:28:06 -- bdev/blockdev.sh@624 -- # bdev='[ 00:26:14.500 { 00:26:14.500 "name": "Nvme0n1p2", 00:26:14.500 "aliases": [ 00:26:14.500 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:26:14.500 ], 00:26:14.500 "product_name": "GPT Disk", 00:26:14.500 "block_size": 4096, 00:26:14.500 "num_blocks": 655103, 00:26:14.500 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:26:14.500 "assigned_rate_limits": { 00:26:14.500 "rw_ios_per_sec": 0, 00:26:14.500 
"rw_mbytes_per_sec": 0, 00:26:14.500 "r_mbytes_per_sec": 0, 00:26:14.500 "w_mbytes_per_sec": 0 00:26:14.500 }, 00:26:14.500 "claimed": false, 00:26:14.500 "zoned": false, 00:26:14.500 "supported_io_types": { 00:26:14.500 "read": true, 00:26:14.500 "write": true, 00:26:14.500 "unmap": true, 00:26:14.500 "write_zeroes": true, 00:26:14.500 "flush": true, 00:26:14.500 "reset": true, 00:26:14.500 "compare": true, 00:26:14.500 "compare_and_write": false, 00:26:14.500 "abort": true, 00:26:14.500 "nvme_admin": false, 00:26:14.500 "nvme_io": false 00:26:14.500 }, 00:26:14.500 "driver_specific": { 00:26:14.500 "gpt": { 00:26:14.500 "base_bdev": "Nvme0n1", 00:26:14.500 "offset_blocks": 655360, 00:26:14.500 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:26:14.500 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:26:14.500 "partition_name": "SPDK_TEST_second" 00:26:14.500 } 00:26:14.500 } 00:26:14.500 } 00:26:14.500 ]' 00:26:14.500 14:28:06 -- bdev/blockdev.sh@625 -- # jq -r length 00:26:14.500 14:28:06 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:26:14.500 14:28:06 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:26:14.759 14:28:06 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:26:14.759 14:28:06 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:26:14.759 14:28:06 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:26:14.759 14:28:06 -- bdev/blockdev.sh@629 -- # killprocess 147223 00:26:14.759 14:28:06 -- common/autotest_common.sh@936 -- # '[' -z 147223 ']' 00:26:14.759 14:28:06 -- common/autotest_common.sh@940 -- # kill -0 147223 00:26:14.759 14:28:06 -- common/autotest_common.sh@941 -- # uname 00:26:14.759 14:28:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:14.759 14:28:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 147223 00:26:14.759 14:28:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:14.759 14:28:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:14.759 killing process with pid 147223 00:26:14.759 14:28:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 147223' 00:26:14.759 14:28:06 -- common/autotest_common.sh@955 -- # kill 147223 00:26:14.759 14:28:06 -- common/autotest_common.sh@960 -- # wait 147223 00:26:15.019 00:26:15.019 real 0m1.918s 00:26:15.019 user 0m2.235s 00:26:15.019 sys 0m0.426s 00:26:15.019 ************************************ 00:26:15.019 END TEST bdev_gpt_uuid 00:26:15.019 ************************************ 00:26:15.019 14:28:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:15.019 14:28:07 -- common/autotest_common.sh@10 -- # set +x 00:26:15.277 14:28:07 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:26:15.277 14:28:07 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:26:15.277 14:28:07 -- bdev/blockdev.sh@809 -- # cleanup 00:26:15.277 14:28:07 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:26:15.277 14:28:07 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:15.277 14:28:07 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:26:15.278 14:28:07 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:26:15.278 14:28:07 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:26:15.278 14:28:07 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:15.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:15.536 Waiting for block devices as requested 00:26:15.536 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:15.536 14:28:07 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:26:15.536 14:28:07 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:26:15.536 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:26:15.536 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:26:15.536 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:26:15.536 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:26:15.536 14:28:07 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:26:15.536 ************************************ 00:26:15.536 END TEST blockdev_nvme_gpt 00:26:15.536 ************************************ 00:26:15.536 00:26:15.536 real 0m33.194s 00:26:15.536 user 0m49.439s 00:26:15.536 sys 0m5.603s 00:26:15.536 14:28:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:15.536 14:28:07 -- common/autotest_common.sh@10 -- # set +x 00:26:15.796 14:28:07 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:26:15.796 14:28:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:15.796 14:28:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:15.796 14:28:07 -- common/autotest_common.sh@10 -- # set +x 00:26:15.796 ************************************ 00:26:15.796 START TEST nvme 00:26:15.796 ************************************ 00:26:15.796 14:28:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:26:15.796 * Looking for test storage... 00:26:15.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:15.796 14:28:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:15.796 14:28:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:15.796 14:28:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:15.796 14:28:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:15.796 14:28:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:15.796 14:28:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:15.796 14:28:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:15.796 14:28:07 -- scripts/common.sh@335 -- # IFS=.-: 00:26:15.796 14:28:07 -- scripts/common.sh@335 -- # read -ra ver1 00:26:15.796 14:28:07 -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.796 14:28:07 -- scripts/common.sh@336 -- # read -ra ver2 00:26:15.796 14:28:07 -- scripts/common.sh@337 -- # local 'op=<' 00:26:15.796 14:28:07 -- scripts/common.sh@339 -- # ver1_l=2 00:26:15.796 14:28:07 -- scripts/common.sh@340 -- # ver2_l=1 00:26:15.796 14:28:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:15.796 14:28:07 -- scripts/common.sh@343 -- # case "$op" in 00:26:15.796 14:28:07 -- scripts/common.sh@344 -- # : 1 00:26:15.796 14:28:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:15.796 14:28:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:15.796 14:28:07 -- scripts/common.sh@364 -- # decimal 1 00:26:15.796 14:28:07 -- scripts/common.sh@352 -- # local d=1 00:26:15.796 14:28:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.796 14:28:07 -- scripts/common.sh@354 -- # echo 1 00:26:15.796 14:28:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:15.796 14:28:07 -- scripts/common.sh@365 -- # decimal 2 00:26:15.796 14:28:07 -- scripts/common.sh@352 -- # local d=2 00:26:15.796 14:28:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.796 14:28:07 -- scripts/common.sh@354 -- # echo 2 00:26:15.796 14:28:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:15.796 14:28:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:15.796 14:28:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:15.796 14:28:07 -- scripts/common.sh@367 -- # return 0 00:26:15.796 14:28:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.796 14:28:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.796 --rc genhtml_branch_coverage=1 00:26:15.796 --rc genhtml_function_coverage=1 00:26:15.796 --rc genhtml_legend=1 00:26:15.796 --rc geninfo_all_blocks=1 00:26:15.796 --rc geninfo_unexecuted_blocks=1 00:26:15.796 00:26:15.796 ' 00:26:15.796 14:28:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.796 --rc genhtml_branch_coverage=1 00:26:15.796 --rc genhtml_function_coverage=1 00:26:15.796 --rc genhtml_legend=1 00:26:15.796 --rc geninfo_all_blocks=1 00:26:15.796 --rc geninfo_unexecuted_blocks=1 00:26:15.796 00:26:15.796 ' 00:26:15.796 14:28:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.796 --rc genhtml_branch_coverage=1 00:26:15.796 --rc genhtml_function_coverage=1 00:26:15.796 --rc genhtml_legend=1 00:26:15.796 --rc geninfo_all_blocks=1 00:26:15.796 --rc geninfo_unexecuted_blocks=1 00:26:15.796 00:26:15.796 ' 00:26:15.796 14:28:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:15.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.796 --rc genhtml_branch_coverage=1 00:26:15.796 --rc genhtml_function_coverage=1 00:26:15.796 --rc genhtml_legend=1 00:26:15.796 --rc geninfo_all_blocks=1 00:26:15.796 --rc geninfo_unexecuted_blocks=1 00:26:15.796 00:26:15.796 ' 00:26:15.796 14:28:07 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:16.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:16.365 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:17.741 14:28:09 -- nvme/nvme.sh@79 -- # uname 00:26:17.741 14:28:09 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:26:17.741 14:28:09 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:26:17.741 14:28:09 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:26:17.741 14:28:09 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:26:17.741 14:28:09 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:26:17.741 14:28:09 -- common/autotest_common.sh@1055 -- # echo 0 00:26:17.741 14:28:09 -- common/autotest_common.sh@1057 -- # stubpid=147628 00:26:17.741 Waiting for stub to ready for secondary processes... 
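The wait that follows is a plain readiness handshake: the stub binary is started as the DPDK primary process, and once its initialization completes it creates /var/run/spdk_stub0, which the harness polls for while also checking the stub is still alive. In outline:

    ./test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e /proc/$stubpid ] || exit 1   # stub died before becoming ready
        sleep 1
    done

Only after this loop exits do the nvme tests below run, attaching to the stub as secondary processes.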
00:26:17.741 14:28:09 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:26:17.741 14:28:09 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:26:17.741 14:28:09 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:17.741 14:28:09 -- common/autotest_common.sh@1061 -- # [[ -e /proc/147628 ]] 00:26:17.741 14:28:09 -- common/autotest_common.sh@1062 -- # sleep 1s 00:26:17.741 [2024-11-18 14:28:09.441355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:17.741 [2024-11-18 14:28:09.441583] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.678 14:28:10 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:18.678 14:28:10 -- common/autotest_common.sh@1061 -- # [[ -e /proc/147628 ]] 00:26:18.678 14:28:10 -- common/autotest_common.sh@1062 -- # sleep 1s 00:26:18.678 [2024-11-18 14:28:10.683450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:18.678 [2024-11-18 14:28:10.740897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.678 [2024-11-18 14:28:10.741406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.678 [2024-11-18 14:28:10.741451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.678 [2024-11-18 14:28:10.750342] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:26:18.937 [2024-11-18 14:28:10.759959] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:26:18.937 [2024-11-18 14:28:10.761056] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:26:19.504 14:28:11 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:26:19.504 14:28:11 -- common/autotest_common.sh@1064 -- # echo done. 00:26:19.504 done. 
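With the stub ready, each test binary below attaches to the memory the stub initialized instead of probing the controller from scratch; that is what the -i 0 instance id set up above is for (the exact attach mechanics live inside the harness, not in this trace). Stripped of the wrappers, the first run is just:

    ./test/nvme/reset/reset -q 64 -w write -o 4096 -t 5

which, as its output shows, skips the QEMU controller and exits without performing a reset.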
00:26:19.504 14:28:11 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:26:19.504 14:28:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:26:19.504 14:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.504 14:28:11 -- common/autotest_common.sh@10 -- # set +x 00:26:19.504 ************************************ 00:26:19.504 START TEST nvme_reset 00:26:19.504 ************************************ 00:26:19.504 14:28:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:26:19.763 Initializing NVMe Controllers 00:26:19.763 Skipping QEMU NVMe SSD at 0000:00:06.0 00:26:19.763 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:26:19.763 00:26:19.763 real 0m0.257s 00:26:19.763 user 0m0.092s 00:26:19.763 sys 0m0.097s 00:26:19.763 ************************************ 00:26:19.763 END TEST nvme_reset 00:26:19.763 14:28:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:19.763 14:28:11 -- common/autotest_common.sh@10 -- # set +x 00:26:19.763 ************************************ 00:26:19.763 14:28:11 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:26:19.763 14:28:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:19.763 14:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:19.763 14:28:11 -- common/autotest_common.sh@10 -- # set +x 00:26:19.763 ************************************ 00:26:19.763 START TEST nvme_identify 00:26:19.763 ************************************ 00:26:19.763 14:28:11 -- common/autotest_common.sh@1114 -- # nvme_identify 00:26:19.763 14:28:11 -- nvme/nvme.sh@12 -- # bdfs=() 00:26:19.763 14:28:11 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:26:19.763 14:28:11 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:26:19.763 14:28:11 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:26:19.763 14:28:11 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:19.763 14:28:11 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:19.763 14:28:11 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:19.763 14:28:11 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:19.763 14:28:11 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:19.763 14:28:11 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:19.763 14:28:11 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:26:19.763 14:28:11 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:26:20.023 [2024-11-18 14:28:12.031479] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 147670 terminated unexpected 00:26:20.023 ===================================================== 00:26:20.023 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:20.023 ===================================================== 00:26:20.023 Controller Capabilities/Features 00:26:20.023 ================================ 00:26:20.023 Vendor ID: 1b36 00:26:20.023 Subsystem Vendor ID: 1af4 00:26:20.023 Serial Number: 12340 00:26:20.023 Model Number: QEMU NVMe Ctrl 00:26:20.023 Firmware Version: 8.0.0 00:26:20.023 Recommended Arb Burst: 6 00:26:20.023 IEEE OUI Identifier: 00 54 52 00:26:20.023 Multi-path I/O 00:26:20.023 May have multiple subsystem ports: No 00:26:20.023 May have multiple controllers: No 00:26:20.023 
Associated with SR-IOV VF: No 00:26:20.023 Max Data Transfer Size: 524288 00:26:20.023 Max Number of Namespaces: 256 00:26:20.023 Max Number of I/O Queues: 64 00:26:20.023 NVMe Specification Version (VS): 1.4 00:26:20.023 NVMe Specification Version (Identify): 1.4 00:26:20.023 Maximum Queue Entries: 2048 00:26:20.023 Contiguous Queues Required: Yes 00:26:20.023 Arbitration Mechanisms Supported 00:26:20.023 Weighted Round Robin: Not Supported 00:26:20.023 Vendor Specific: Not Supported 00:26:20.023 Reset Timeout: 7500 ms 00:26:20.023 Doorbell Stride: 4 bytes 00:26:20.023 NVM Subsystem Reset: Not Supported 00:26:20.023 Command Sets Supported 00:26:20.023 NVM Command Set: Supported 00:26:20.023 Boot Partition: Not Supported 00:26:20.023 Memory Page Size Minimum: 4096 bytes 00:26:20.023 Memory Page Size Maximum: 65536 bytes 00:26:20.023 Persistent Memory Region: Not Supported 00:26:20.023 Optional Asynchronous Events Supported 00:26:20.023 Namespace Attribute Notices: Supported 00:26:20.023 Firmware Activation Notices: Not Supported 00:26:20.023 ANA Change Notices: Not Supported 00:26:20.023 PLE Aggregate Log Change Notices: Not Supported 00:26:20.023 LBA Status Info Alert Notices: Not Supported 00:26:20.023 EGE Aggregate Log Change Notices: Not Supported 00:26:20.023 Normal NVM Subsystem Shutdown event: Not Supported 00:26:20.023 Zone Descriptor Change Notices: Not Supported 00:26:20.023 Discovery Log Change Notices: Not Supported 00:26:20.023 Controller Attributes 00:26:20.023 128-bit Host Identifier: Not Supported 00:26:20.023 Non-Operational Permissive Mode: Not Supported 00:26:20.023 NVM Sets: Not Supported 00:26:20.023 Read Recovery Levels: Not Supported 00:26:20.023 Endurance Groups: Not Supported 00:26:20.023 Predictable Latency Mode: Not Supported 00:26:20.023 Traffic Based Keep ALive: Not Supported 00:26:20.023 Namespace Granularity: Not Supported 00:26:20.023 SQ Associations: Not Supported 00:26:20.023 UUID List: Not Supported 00:26:20.023 Multi-Domain Subsystem: Not Supported 00:26:20.023 Fixed Capacity Management: Not Supported 00:26:20.023 Variable Capacity Management: Not Supported 00:26:20.023 Delete Endurance Group: Not Supported 00:26:20.023 Delete NVM Set: Not Supported 00:26:20.023 Extended LBA Formats Supported: Supported 00:26:20.023 Flexible Data Placement Supported: Not Supported 00:26:20.023 00:26:20.023 Controller Memory Buffer Support 00:26:20.023 ================================ 00:26:20.023 Supported: No 00:26:20.023 00:26:20.023 Persistent Memory Region Support 00:26:20.023 ================================ 00:26:20.023 Supported: No 00:26:20.023 00:26:20.023 Admin Command Set Attributes 00:26:20.023 ============================ 00:26:20.023 Security Send/Receive: Not Supported 00:26:20.023 Format NVM: Supported 00:26:20.023 Firmware Activate/Download: Not Supported 00:26:20.023 Namespace Management: Supported 00:26:20.023 Device Self-Test: Not Supported 00:26:20.023 Directives: Supported 00:26:20.023 NVMe-MI: Not Supported 00:26:20.023 Virtualization Management: Not Supported 00:26:20.023 Doorbell Buffer Config: Supported 00:26:20.023 Get LBA Status Capability: Not Supported 00:26:20.023 Command & Feature Lockdown Capability: Not Supported 00:26:20.023 Abort Command Limit: 4 00:26:20.023 Async Event Request Limit: 4 00:26:20.023 Number of Firmware Slots: N/A 00:26:20.023 Firmware Slot 1 Read-Only: N/A 00:26:20.023 Firmware Activation Without Reset: N/A 00:26:20.023 Multiple Update Detection Support: N/A 00:26:20.023 Firmware Update Granularity: No Information 
Provided 00:26:20.023 Per-Namespace SMART Log: Yes 00:26:20.023 Asymmetric Namespace Access Log Page: Not Supported 00:26:20.023 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:26:20.024 Command Effects Log Page: Supported 00:26:20.024 Get Log Page Extended Data: Supported 00:26:20.024 Telemetry Log Pages: Not Supported 00:26:20.024 Persistent Event Log Pages: Not Supported 00:26:20.024 Supported Log Pages Log Page: May Support 00:26:20.024 Commands Supported & Effects Log Page: Not Supported 00:26:20.024 Feature Identifiers & Effects Log Page:May Support 00:26:20.024 NVMe-MI Commands & Effects Log Page: May Support 00:26:20.024 Data Area 4 for Telemetry Log: Not Supported 00:26:20.024 Error Log Page Entries Supported: 1 00:26:20.024 Keep Alive: Not Supported 00:26:20.024 00:26:20.024 NVM Command Set Attributes 00:26:20.024 ========================== 00:26:20.024 Submission Queue Entry Size 00:26:20.024 Max: 64 00:26:20.024 Min: 64 00:26:20.024 Completion Queue Entry Size 00:26:20.024 Max: 16 00:26:20.024 Min: 16 00:26:20.024 Number of Namespaces: 256 00:26:20.024 Compare Command: Supported 00:26:20.024 Write Uncorrectable Command: Not Supported 00:26:20.024 Dataset Management Command: Supported 00:26:20.024 Write Zeroes Command: Supported 00:26:20.024 Set Features Save Field: Supported 00:26:20.024 Reservations: Not Supported 00:26:20.024 Timestamp: Supported 00:26:20.024 Copy: Supported 00:26:20.024 Volatile Write Cache: Present 00:26:20.024 Atomic Write Unit (Normal): 1 00:26:20.024 Atomic Write Unit (PFail): 1 00:26:20.024 Atomic Compare & Write Unit: 1 00:26:20.024 Fused Compare & Write: Not Supported 00:26:20.024 Scatter-Gather List 00:26:20.024 SGL Command Set: Supported 00:26:20.024 SGL Keyed: Not Supported 00:26:20.024 SGL Bit Bucket Descriptor: Not Supported 00:26:20.024 SGL Metadata Pointer: Not Supported 00:26:20.024 Oversized SGL: Not Supported 00:26:20.024 SGL Metadata Address: Not Supported 00:26:20.024 SGL Offset: Not Supported 00:26:20.024 Transport SGL Data Block: Not Supported 00:26:20.024 Replay Protected Memory Block: Not Supported 00:26:20.024 00:26:20.024 Firmware Slot Information 00:26:20.024 ========================= 00:26:20.024 Active slot: 1 00:26:20.024 Slot 1 Firmware Revision: 1.0 00:26:20.024 00:26:20.024 00:26:20.024 Commands Supported and Effects 00:26:20.024 ============================== 00:26:20.024 Admin Commands 00:26:20.024 -------------- 00:26:20.024 Delete I/O Submission Queue (00h): Supported 00:26:20.024 Create I/O Submission Queue (01h): Supported 00:26:20.024 Get Log Page (02h): Supported 00:26:20.024 Delete I/O Completion Queue (04h): Supported 00:26:20.024 Create I/O Completion Queue (05h): Supported 00:26:20.024 Identify (06h): Supported 00:26:20.024 Abort (08h): Supported 00:26:20.024 Set Features (09h): Supported 00:26:20.024 Get Features (0Ah): Supported 00:26:20.024 Asynchronous Event Request (0Ch): Supported 00:26:20.024 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:20.024 Directive Send (19h): Supported 00:26:20.024 Directive Receive (1Ah): Supported 00:26:20.024 Virtualization Management (1Ch): Supported 00:26:20.024 Doorbell Buffer Config (7Ch): Supported 00:26:20.024 Format NVM (80h): Supported LBA-Change 00:26:20.024 I/O Commands 00:26:20.024 ------------ 00:26:20.024 Flush (00h): Supported LBA-Change 00:26:20.024 Write (01h): Supported LBA-Change 00:26:20.024 Read (02h): Supported 00:26:20.024 Compare (05h): Supported 00:26:20.024 Write Zeroes (08h): Supported LBA-Change 00:26:20.024 Dataset Management (09h): 
Supported LBA-Change 00:26:20.024 Unknown (0Ch): Supported 00:26:20.024 Unknown (12h): Supported 00:26:20.024 Copy (19h): Supported LBA-Change 00:26:20.024 Unknown (1Dh): Supported LBA-Change 00:26:20.024 00:26:20.024 Error Log 00:26:20.024 ========= 00:26:20.024 00:26:20.024 Arbitration 00:26:20.024 =========== 00:26:20.024 Arbitration Burst: no limit 00:26:20.024 00:26:20.024 Power Management 00:26:20.024 ================ 00:26:20.024 Number of Power States: 1 00:26:20.024 Current Power State: Power State #0 00:26:20.024 Power State #0: 00:26:20.024 Max Power: 25.00 W 00:26:20.024 Non-Operational State: Operational 00:26:20.024 Entry Latency: 16 microseconds 00:26:20.024 Exit Latency: 4 microseconds 00:26:20.024 Relative Read Throughput: 0 00:26:20.024 Relative Read Latency: 0 00:26:20.024 Relative Write Throughput: 0 00:26:20.024 Relative Write Latency: 0 00:26:20.024 Idle Power: Not Reported 00:26:20.024 Active Power: Not Reported 00:26:20.024 Non-Operational Permissive Mode: Not Supported 00:26:20.024 00:26:20.024 Health Information 00:26:20.024 ================== 00:26:20.024 Critical Warnings: 00:26:20.024 Available Spare Space: OK 00:26:20.024 Temperature: OK 00:26:20.024 Device Reliability: OK 00:26:20.024 Read Only: No 00:26:20.024 Volatile Memory Backup: OK 00:26:20.024 Current Temperature: 323 Kelvin (50 Celsius) 00:26:20.024 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:20.024 Available Spare: 0% 00:26:20.024 Available Spare Threshold: 0% 00:26:20.024 Life Percentage Used: 0% 00:26:20.024 Data Units Read: 8831 00:26:20.024 Data Units Written: 4315 00:26:20.024 Host Read Commands: 307628 00:26:20.024 Host Write Commands: 169043 00:26:20.024 Controller Busy Time: 0 minutes 00:26:20.024 Power Cycles: 0 00:26:20.024 Power On Hours: 0 hours 00:26:20.024 Unsafe Shutdowns: 0 00:26:20.024 Unrecoverable Media Errors: 0 00:26:20.024 Lifetime Error Log Entries: 0 00:26:20.024 Warning Temperature Time: 0 minutes 00:26:20.024 Critical Temperature Time: 0 minutes 00:26:20.024 00:26:20.024 Number of Queues 00:26:20.024 ================ 00:26:20.024 Number of I/O Submission Queues: 64 00:26:20.024 Number of I/O Completion Queues: 64 00:26:20.024 00:26:20.024 ZNS Specific Controller Data 00:26:20.024 ============================ 00:26:20.024 Zone Append Size Limit: 0 00:26:20.024 00:26:20.024 00:26:20.024 Active Namespaces 00:26:20.024 ================= 00:26:20.024 Namespace ID:1 00:26:20.024 Error Recovery Timeout: Unlimited 00:26:20.024 Command Set Identifier: NVM (00h) 00:26:20.024 Deallocate: Supported 00:26:20.024 Deallocated/Unwritten Error: Supported 00:26:20.024 Deallocated Read Value: All 0x00 00:26:20.024 Deallocate in Write Zeroes: Not Supported 00:26:20.024 Deallocated Guard Field: 0xFFFF 00:26:20.024 Flush: Supported 00:26:20.024 Reservation: Not Supported 00:26:20.024 Namespace Sharing Capabilities: Private 00:26:20.024 Size (in LBAs): 1310720 (5GiB) 00:26:20.024 Capacity (in LBAs): 1310720 (5GiB) 00:26:20.024 Utilization (in LBAs): 1310720 (5GiB) 00:26:20.024 Thin Provisioning: Not Supported 00:26:20.024 Per-NS Atomic Units: No 00:26:20.024 Maximum Single Source Range Length: 128 00:26:20.024 Maximum Copy Length: 128 00:26:20.024 Maximum Source Range Count: 128 00:26:20.024 NGUID/EUI64 Never Reused: No 00:26:20.024 Namespace Write Protected: No 00:26:20.024 Number of LBA Formats: 8 00:26:20.024 Current LBA Format: LBA Format #04 00:26:20.024 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:20.024 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:20.024 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:26:20.024 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:20.024 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:20.024 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:20.024 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:20.024 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:20.024 00:26:20.024 14:28:12 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:26:20.024 14:28:12 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:26:20.284 ===================================================== 00:26:20.284 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:20.284 ===================================================== 00:26:20.284 Controller Capabilities/Features 00:26:20.284 ================================ 00:26:20.284 Vendor ID: 1b36 00:26:20.284 Subsystem Vendor ID: 1af4 00:26:20.284 Serial Number: 12340 00:26:20.284 Model Number: QEMU NVMe Ctrl 00:26:20.284 Firmware Version: 8.0.0 00:26:20.284 Recommended Arb Burst: 6 00:26:20.284 IEEE OUI Identifier: 00 54 52 00:26:20.284 Multi-path I/O 00:26:20.284 May have multiple subsystem ports: No 00:26:20.284 May have multiple controllers: No 00:26:20.284 Associated with SR-IOV VF: No 00:26:20.284 Max Data Transfer Size: 524288 00:26:20.284 Max Number of Namespaces: 256 00:26:20.284 Max Number of I/O Queues: 64 00:26:20.284 NVMe Specification Version (VS): 1.4 00:26:20.284 NVMe Specification Version (Identify): 1.4 00:26:20.284 Maximum Queue Entries: 2048 00:26:20.284 Contiguous Queues Required: Yes 00:26:20.284 Arbitration Mechanisms Supported 00:26:20.284 Weighted Round Robin: Not Supported 00:26:20.284 Vendor Specific: Not Supported 00:26:20.284 Reset Timeout: 7500 ms 00:26:20.284 Doorbell Stride: 4 bytes 00:26:20.284 NVM Subsystem Reset: Not Supported 00:26:20.284 Command Sets Supported 00:26:20.284 NVM Command Set: Supported 00:26:20.284 Boot Partition: Not Supported 00:26:20.284 Memory Page Size Minimum: 4096 bytes 00:26:20.284 Memory Page Size Maximum: 65536 bytes 00:26:20.284 Persistent Memory Region: Not Supported 00:26:20.284 Optional Asynchronous Events Supported 00:26:20.284 Namespace Attribute Notices: Supported 00:26:20.284 Firmware Activation Notices: Not Supported 00:26:20.284 ANA Change Notices: Not Supported 00:26:20.284 PLE Aggregate Log Change Notices: Not Supported 00:26:20.284 LBA Status Info Alert Notices: Not Supported 00:26:20.284 EGE Aggregate Log Change Notices: Not Supported 00:26:20.284 Normal NVM Subsystem Shutdown event: Not Supported 00:26:20.284 Zone Descriptor Change Notices: Not Supported 00:26:20.284 Discovery Log Change Notices: Not Supported 00:26:20.284 Controller Attributes 00:26:20.284 128-bit Host Identifier: Not Supported 00:26:20.284 Non-Operational Permissive Mode: Not Supported 00:26:20.284 NVM Sets: Not Supported 00:26:20.284 Read Recovery Levels: Not Supported 00:26:20.284 Endurance Groups: Not Supported 00:26:20.284 Predictable Latency Mode: Not Supported 00:26:20.284 Traffic Based Keep ALive: Not Supported 00:26:20.284 Namespace Granularity: Not Supported 00:26:20.284 SQ Associations: Not Supported 00:26:20.284 UUID List: Not Supported 00:26:20.284 Multi-Domain Subsystem: Not Supported 00:26:20.284 Fixed Capacity Management: Not Supported 00:26:20.284 Variable Capacity Management: Not Supported 00:26:20.284 Delete Endurance Group: Not Supported 00:26:20.284 Delete NVM Set: Not Supported 00:26:20.284 Extended LBA Formats Supported: Supported 
00:26:20.284 Flexible Data Placement Supported: Not Supported 00:26:20.284 00:26:20.284 Controller Memory Buffer Support 00:26:20.284 ================================ 00:26:20.284 Supported: No 00:26:20.284 00:26:20.284 Persistent Memory Region Support 00:26:20.284 ================================ 00:26:20.284 Supported: No 00:26:20.284 00:26:20.284 Admin Command Set Attributes 00:26:20.284 ============================ 00:26:20.284 Security Send/Receive: Not Supported 00:26:20.284 Format NVM: Supported 00:26:20.284 Firmware Activate/Download: Not Supported 00:26:20.284 Namespace Management: Supported 00:26:20.284 Device Self-Test: Not Supported 00:26:20.284 Directives: Supported 00:26:20.285 NVMe-MI: Not Supported 00:26:20.285 Virtualization Management: Not Supported 00:26:20.285 Doorbell Buffer Config: Supported 00:26:20.285 Get LBA Status Capability: Not Supported 00:26:20.285 Command & Feature Lockdown Capability: Not Supported 00:26:20.285 Abort Command Limit: 4 00:26:20.285 Async Event Request Limit: 4 00:26:20.285 Number of Firmware Slots: N/A 00:26:20.285 Firmware Slot 1 Read-Only: N/A 00:26:20.285 Firmware Activation Without Reset: N/A 00:26:20.285 Multiple Update Detection Support: N/A 00:26:20.285 Firmware Update Granularity: No Information Provided 00:26:20.285 Per-Namespace SMART Log: Yes 00:26:20.285 Asymmetric Namespace Access Log Page: Not Supported 00:26:20.285 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:26:20.285 Command Effects Log Page: Supported 00:26:20.285 Get Log Page Extended Data: Supported 00:26:20.285 Telemetry Log Pages: Not Supported 00:26:20.285 Persistent Event Log Pages: Not Supported 00:26:20.285 Supported Log Pages Log Page: May Support 00:26:20.285 Commands Supported & Effects Log Page: Not Supported 00:26:20.285 Feature Identifiers & Effects Log Page:May Support 00:26:20.285 NVMe-MI Commands & Effects Log Page: May Support 00:26:20.285 Data Area 4 for Telemetry Log: Not Supported 00:26:20.285 Error Log Page Entries Supported: 1 00:26:20.285 Keep Alive: Not Supported 00:26:20.285 00:26:20.285 NVM Command Set Attributes 00:26:20.285 ========================== 00:26:20.285 Submission Queue Entry Size 00:26:20.285 Max: 64 00:26:20.285 Min: 64 00:26:20.285 Completion Queue Entry Size 00:26:20.285 Max: 16 00:26:20.285 Min: 16 00:26:20.285 Number of Namespaces: 256 00:26:20.285 Compare Command: Supported 00:26:20.285 Write Uncorrectable Command: Not Supported 00:26:20.285 Dataset Management Command: Supported 00:26:20.285 Write Zeroes Command: Supported 00:26:20.285 Set Features Save Field: Supported 00:26:20.285 Reservations: Not Supported 00:26:20.285 Timestamp: Supported 00:26:20.285 Copy: Supported 00:26:20.285 Volatile Write Cache: Present 00:26:20.285 Atomic Write Unit (Normal): 1 00:26:20.285 Atomic Write Unit (PFail): 1 00:26:20.285 Atomic Compare & Write Unit: 1 00:26:20.285 Fused Compare & Write: Not Supported 00:26:20.285 Scatter-Gather List 00:26:20.285 SGL Command Set: Supported 00:26:20.285 SGL Keyed: Not Supported 00:26:20.285 SGL Bit Bucket Descriptor: Not Supported 00:26:20.285 SGL Metadata Pointer: Not Supported 00:26:20.285 Oversized SGL: Not Supported 00:26:20.285 SGL Metadata Address: Not Supported 00:26:20.285 SGL Offset: Not Supported 00:26:20.285 Transport SGL Data Block: Not Supported 00:26:20.285 Replay Protected Memory Block: Not Supported 00:26:20.285 00:26:20.285 Firmware Slot Information 00:26:20.285 ========================= 00:26:20.285 Active slot: 1 00:26:20.285 Slot 1 Firmware Revision: 1.0 00:26:20.285 00:26:20.285 
00:26:20.285 Commands Supported and Effects 00:26:20.285 ============================== 00:26:20.285 Admin Commands 00:26:20.285 -------------- 00:26:20.285 Delete I/O Submission Queue (00h): Supported 00:26:20.285 Create I/O Submission Queue (01h): Supported 00:26:20.285 Get Log Page (02h): Supported 00:26:20.285 Delete I/O Completion Queue (04h): Supported 00:26:20.285 Create I/O Completion Queue (05h): Supported 00:26:20.285 Identify (06h): Supported 00:26:20.285 Abort (08h): Supported 00:26:20.285 Set Features (09h): Supported 00:26:20.285 Get Features (0Ah): Supported 00:26:20.285 Asynchronous Event Request (0Ch): Supported 00:26:20.285 Namespace Attachment (15h): Supported NS-Inventory-Change 00:26:20.285 Directive Send (19h): Supported 00:26:20.285 Directive Receive (1Ah): Supported 00:26:20.285 Virtualization Management (1Ch): Supported 00:26:20.285 Doorbell Buffer Config (7Ch): Supported 00:26:20.285 Format NVM (80h): Supported LBA-Change 00:26:20.285 I/O Commands 00:26:20.285 ------------ 00:26:20.285 Flush (00h): Supported LBA-Change 00:26:20.285 Write (01h): Supported LBA-Change 00:26:20.285 Read (02h): Supported 00:26:20.285 Compare (05h): Supported 00:26:20.285 Write Zeroes (08h): Supported LBA-Change 00:26:20.285 Dataset Management (09h): Supported LBA-Change 00:26:20.285 Unknown (0Ch): Supported 00:26:20.285 Unknown (12h): Supported 00:26:20.285 Copy (19h): Supported LBA-Change 00:26:20.285 Unknown (1Dh): Supported LBA-Change 00:26:20.285 00:26:20.285 Error Log 00:26:20.285 ========= 00:26:20.285 00:26:20.285 Arbitration 00:26:20.285 =========== 00:26:20.285 Arbitration Burst: no limit 00:26:20.285 00:26:20.285 Power Management 00:26:20.285 ================ 00:26:20.285 Number of Power States: 1 00:26:20.285 Current Power State: Power State #0 00:26:20.285 Power State #0: 00:26:20.285 Max Power: 25.00 W 00:26:20.285 Non-Operational State: Operational 00:26:20.285 Entry Latency: 16 microseconds 00:26:20.285 Exit Latency: 4 microseconds 00:26:20.285 Relative Read Throughput: 0 00:26:20.285 Relative Read Latency: 0 00:26:20.285 Relative Write Throughput: 0 00:26:20.285 Relative Write Latency: 0 00:26:20.285 Idle Power: Not Reported 00:26:20.285 Active Power: Not Reported 00:26:20.285 Non-Operational Permissive Mode: Not Supported 00:26:20.285 00:26:20.285 Health Information 00:26:20.285 ================== 00:26:20.285 Critical Warnings: 00:26:20.285 Available Spare Space: OK 00:26:20.285 Temperature: OK 00:26:20.285 Device Reliability: OK 00:26:20.285 Read Only: No 00:26:20.285 Volatile Memory Backup: OK 00:26:20.285 Current Temperature: 323 Kelvin (50 Celsius) 00:26:20.285 Temperature Threshold: 343 Kelvin (70 Celsius) 00:26:20.285 Available Spare: 0% 00:26:20.285 Available Spare Threshold: 0% 00:26:20.285 Life Percentage Used: 0% 00:26:20.285 Data Units Read: 8831 00:26:20.285 Data Units Written: 4315 00:26:20.285 Host Read Commands: 307628 00:26:20.285 Host Write Commands: 169043 00:26:20.285 Controller Busy Time: 0 minutes 00:26:20.285 Power Cycles: 0 00:26:20.285 Power On Hours: 0 hours 00:26:20.285 Unsafe Shutdowns: 0 00:26:20.285 Unrecoverable Media Errors: 0 00:26:20.285 Lifetime Error Log Entries: 0 00:26:20.285 Warning Temperature Time: 0 minutes 00:26:20.285 Critical Temperature Time: 0 minutes 00:26:20.285 00:26:20.285 Number of Queues 00:26:20.285 ================ 00:26:20.285 Number of I/O Submission Queues: 64 00:26:20.285 Number of I/O Completion Queues: 64 00:26:20.285 00:26:20.285 ZNS Specific Controller Data 00:26:20.285 ============================ 
00:26:20.285 Zone Append Size Limit: 0 00:26:20.285 00:26:20.285 00:26:20.285 Active Namespaces 00:26:20.285 ================= 00:26:20.285 Namespace ID:1 00:26:20.285 Error Recovery Timeout: Unlimited 00:26:20.285 Command Set Identifier: NVM (00h) 00:26:20.285 Deallocate: Supported 00:26:20.285 Deallocated/Unwritten Error: Supported 00:26:20.285 Deallocated Read Value: All 0x00 00:26:20.285 Deallocate in Write Zeroes: Not Supported 00:26:20.285 Deallocated Guard Field: 0xFFFF 00:26:20.285 Flush: Supported 00:26:20.285 Reservation: Not Supported 00:26:20.285 Namespace Sharing Capabilities: Private 00:26:20.285 Size (in LBAs): 1310720 (5GiB) 00:26:20.285 Capacity (in LBAs): 1310720 (5GiB) 00:26:20.285 Utilization (in LBAs): 1310720 (5GiB) 00:26:20.285 Thin Provisioning: Not Supported 00:26:20.285 Per-NS Atomic Units: No 00:26:20.285 Maximum Single Source Range Length: 128 00:26:20.285 Maximum Copy Length: 128 00:26:20.286 Maximum Source Range Count: 128 00:26:20.286 NGUID/EUI64 Never Reused: No 00:26:20.286 Namespace Write Protected: No 00:26:20.286 Number of LBA Formats: 8 00:26:20.286 Current LBA Format: LBA Format #04 00:26:20.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:20.286 LBA Format #01: Data Size: 512 Metadata Size: 8 00:26:20.286 LBA Format #02: Data Size: 512 Metadata Size: 16 00:26:20.286 LBA Format #03: Data Size: 512 Metadata Size: 64 00:26:20.286 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:26:20.286 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:26:20.286 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:26:20.286 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:26:20.286 00:26:20.544 00:26:20.544 real 0m0.629s 00:26:20.544 user 0m0.247s 00:26:20.544 sys 0m0.264s 00:26:20.544 14:28:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:20.544 ************************************ 00:26:20.544 END TEST nvme_identify 00:26:20.544 ************************************ 00:26:20.544 14:28:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.544 14:28:12 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:26:20.544 14:28:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:20.545 14:28:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.545 14:28:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.545 ************************************ 00:26:20.545 START TEST nvme_perf 00:26:20.545 ************************************ 00:26:20.545 14:28:12 -- common/autotest_common.sh@1114 -- # nvme_perf 00:26:20.545 14:28:12 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:26:21.923 Initializing NVMe Controllers 00:26:21.923 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:21.923 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:21.923 Initialization complete. Launching workers. 
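The read-latency report that follows (and the matching write report further down) comes from spdk_nvme_perf. Both passes in this log use a queue depth of 128, 12288-byte I/Os (three 4 KiB blocks at the active LBA format #04), a 1-second run, and -LL, which is what yields the percentile summary and the cumulative histogram. A sketch of the two invocations as traced in nvme.sh@22 and nvme.sh@23 (paths and flags copied from the log; the surrounding run_test bookkeeping is omitted):

  # Both perf passes as traced; only the workload (-w) and the -N flag
  # on the read pass differ.
  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -q 128 -w read  -o 12288 -t 1 -LL -i 0 -N
  "$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0

Reading the output: each "Range in us : Cumulative IO count" row is a histogram bucket, and the percentile summary condenses it, so the 50.00000% entry at 2368.233us below sits just under the 2394.57us mean reported in the device line.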
00:26:21.923 ======================================================== 00:26:21.923 Latency(us) 00:26:21.923 Device Information : IOPS MiB/s Average min max 00:26:21.923 PCIE (0000:00:06.0) NSID 1 from core 0: 53500.77 626.96 2394.57 758.67 5274.97 00:26:21.923 ======================================================== 00:26:21.923 Total : 53500.77 626.96 2394.57 758.67 5274.97 00:26:21.923 00:26:21.923 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:21.923 ================================================================================= 00:26:21.923 1.00000% : 1519.244us 00:26:21.923 10.00000% : 1712.873us 00:26:21.923 25.00000% : 1966.080us 00:26:21.923 50.00000% : 2368.233us 00:26:21.923 75.00000% : 2755.491us 00:26:21.923 90.00000% : 3142.749us 00:26:21.923 95.00000% : 3410.851us 00:26:21.923 98.00000% : 3604.480us 00:26:21.923 99.00000% : 3708.742us 00:26:21.923 99.50000% : 3991.738us 00:26:21.923 99.90000% : 4647.098us 00:26:21.923 99.99000% : 5153.513us 00:26:21.923 99.99900% : 5302.458us 00:26:21.923 99.99990% : 5302.458us 00:26:21.923 99.99999% : 5302.458us 00:26:21.923 00:26:21.923 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:21.923 ============================================================================== 00:26:21.923 Range in us Cumulative IO count 00:26:21.923 755.898 - 759.622: 0.0019% ( 1) 00:26:21.923 1288.378 - 1295.825: 0.0037% ( 1) 00:26:21.923 1310.720 - 1318.167: 0.0075% ( 2) 00:26:21.923 1325.615 - 1333.062: 0.0093% ( 1) 00:26:21.923 1333.062 - 1340.509: 0.0112% ( 1) 00:26:21.923 1340.509 - 1347.956: 0.0131% ( 1) 00:26:21.923 1347.956 - 1355.404: 0.0168% ( 2) 00:26:21.923 1355.404 - 1362.851: 0.0187% ( 1) 00:26:21.923 1362.851 - 1370.298: 0.0206% ( 1) 00:26:21.923 1370.298 - 1377.745: 0.0262% ( 3) 00:26:21.923 1377.745 - 1385.193: 0.0280% ( 1) 00:26:21.923 1385.193 - 1392.640: 0.0336% ( 3) 00:26:21.923 1392.640 - 1400.087: 0.0411% ( 4) 00:26:21.923 1400.087 - 1407.535: 0.0523% ( 6) 00:26:21.923 1407.535 - 1414.982: 0.0598% ( 4) 00:26:21.923 1414.982 - 1422.429: 0.0710% ( 6) 00:26:21.923 1422.429 - 1429.876: 0.0860% ( 8) 00:26:21.923 1429.876 - 1437.324: 0.1177% ( 17) 00:26:21.923 1437.324 - 1444.771: 0.1476% ( 16) 00:26:21.923 1444.771 - 1452.218: 0.1682% ( 11) 00:26:21.923 1452.218 - 1459.665: 0.2299% ( 33) 00:26:21.923 1459.665 - 1467.113: 0.2878% ( 31) 00:26:21.923 1467.113 - 1474.560: 0.3682% ( 43) 00:26:21.923 1474.560 - 1482.007: 0.4336% ( 35) 00:26:21.923 1482.007 - 1489.455: 0.5214% ( 47) 00:26:21.923 1489.455 - 1496.902: 0.6112% ( 48) 00:26:21.923 1496.902 - 1504.349: 0.7308% ( 64) 00:26:21.923 1504.349 - 1511.796: 0.8822% ( 81) 00:26:21.923 1511.796 - 1519.244: 1.0279% ( 78) 00:26:21.923 1519.244 - 1526.691: 1.2279% ( 107) 00:26:21.923 1526.691 - 1534.138: 1.4242% ( 105) 00:26:21.923 1534.138 - 1541.585: 1.6204% ( 105) 00:26:21.923 1541.585 - 1549.033: 1.8596% ( 128) 00:26:21.923 1549.033 - 1556.480: 2.1213% ( 140) 00:26:21.923 1556.480 - 1563.927: 2.3792% ( 138) 00:26:21.923 1563.927 - 1571.375: 2.6857% ( 164) 00:26:21.923 1571.375 - 1578.822: 2.9810% ( 158) 00:26:21.923 1578.822 - 1586.269: 3.2819% ( 161) 00:26:21.923 1586.269 - 1593.716: 3.6501% ( 197) 00:26:21.923 1593.716 - 1601.164: 4.0015% ( 188) 00:26:21.923 1601.164 - 1608.611: 4.3417% ( 182) 00:26:21.923 1608.611 - 1616.058: 4.7211% ( 203) 00:26:21.923 1616.058 - 1623.505: 5.0706% ( 187) 00:26:21.923 1623.505 - 1630.953: 5.4537% ( 205) 00:26:21.923 1630.953 - 1638.400: 5.8443% ( 209) 00:26:21.923 1638.400 - 1645.847: 6.2331% ( 208) 00:26:21.923 
1645.847 - 1653.295: 6.6199% ( 207) 00:26:21.923 1653.295 - 1660.742: 7.0741% ( 243) 00:26:21.923 1660.742 - 1668.189: 7.4834% ( 219) 00:26:21.923 1668.189 - 1675.636: 7.9264% ( 237) 00:26:21.923 1675.636 - 1683.084: 8.3170% ( 209) 00:26:21.923 1683.084 - 1690.531: 8.7711% ( 243) 00:26:21.923 1690.531 - 1697.978: 9.2347% ( 248) 00:26:21.923 1697.978 - 1705.425: 9.6402% ( 217) 00:26:21.923 1705.425 - 1712.873: 10.0944% ( 243) 00:26:21.923 1712.873 - 1720.320: 10.5504% ( 244) 00:26:21.923 1720.320 - 1727.767: 10.9597% ( 219) 00:26:21.923 1727.767 - 1735.215: 11.4326% ( 253) 00:26:21.923 1735.215 - 1742.662: 11.8774% ( 238) 00:26:21.923 1742.662 - 1750.109: 12.3652% ( 261) 00:26:21.923 1750.109 - 1757.556: 12.8044% ( 235) 00:26:21.923 1757.556 - 1765.004: 13.2698% ( 249) 00:26:21.923 1765.004 - 1772.451: 13.7389% ( 251) 00:26:21.923 1772.451 - 1779.898: 14.1688% ( 230) 00:26:21.923 1779.898 - 1787.345: 14.6454% ( 255) 00:26:21.923 1787.345 - 1794.793: 15.1089% ( 248) 00:26:21.923 1794.793 - 1802.240: 15.5985% ( 262) 00:26:21.923 1802.240 - 1809.687: 16.0097% ( 220) 00:26:21.923 1809.687 - 1817.135: 16.4826% ( 253) 00:26:21.923 1817.135 - 1824.582: 16.9517% ( 251) 00:26:21.923 1824.582 - 1832.029: 17.4152% ( 248) 00:26:21.923 1832.029 - 1839.476: 17.8563% ( 236) 00:26:21.923 1839.476 - 1846.924: 18.3572% ( 268) 00:26:21.923 1846.924 - 1854.371: 18.7814% ( 227) 00:26:21.923 1854.371 - 1861.818: 19.2468% ( 249) 00:26:21.923 1861.818 - 1869.265: 19.6785% ( 231) 00:26:21.923 1869.265 - 1876.713: 20.1813% ( 269) 00:26:21.923 1876.713 - 1884.160: 20.6261% ( 238) 00:26:21.923 1884.160 - 1891.607: 21.0728% ( 239) 00:26:21.923 1891.607 - 1899.055: 21.5494% ( 255) 00:26:21.923 1899.055 - 1906.502: 22.0073% ( 245) 00:26:21.923 1906.502 - 1921.396: 22.9137% ( 485) 00:26:21.923 1921.396 - 1936.291: 23.8725% ( 513) 00:26:21.923 1936.291 - 1951.185: 24.8052% ( 499) 00:26:21.923 1951.185 - 1966.080: 25.7509% ( 506) 00:26:21.923 1966.080 - 1980.975: 26.6816% ( 498) 00:26:21.923 1980.975 - 1995.869: 27.5825% ( 482) 00:26:21.923 1995.869 - 2010.764: 28.5506% ( 518) 00:26:21.923 2010.764 - 2025.658: 29.4552% ( 484) 00:26:21.923 2025.658 - 2040.553: 30.4140% ( 513) 00:26:21.923 2040.553 - 2055.447: 31.3429% ( 497) 00:26:21.923 2055.447 - 2070.342: 32.2848% ( 504) 00:26:21.923 2070.342 - 2085.236: 33.2044% ( 492) 00:26:21.923 2085.236 - 2100.131: 34.1426% ( 502) 00:26:21.923 2100.131 - 2115.025: 35.0846% ( 504) 00:26:21.923 2115.025 - 2129.920: 36.0041% ( 492) 00:26:21.923 2129.920 - 2144.815: 36.9237% ( 492) 00:26:21.923 2144.815 - 2159.709: 37.8338% ( 487) 00:26:21.923 2159.709 - 2174.604: 38.7702% ( 501) 00:26:21.923 2174.604 - 2189.498: 39.6972% ( 496) 00:26:21.923 2189.498 - 2204.393: 40.6093% ( 488) 00:26:21.923 2204.393 - 2219.287: 41.5550% ( 506) 00:26:21.923 2219.287 - 2234.182: 42.4652% ( 487) 00:26:21.923 2234.182 - 2249.076: 43.4165% ( 509) 00:26:21.923 2249.076 - 2263.971: 44.3473% ( 498) 00:26:21.923 2263.971 - 2278.865: 45.2930% ( 506) 00:26:21.923 2278.865 - 2293.760: 46.2312% ( 502) 00:26:21.923 2293.760 - 2308.655: 47.1489% ( 491) 00:26:21.923 2308.655 - 2323.549: 48.0871% ( 502) 00:26:21.923 2323.549 - 2338.444: 49.0178% ( 498) 00:26:21.923 2338.444 - 2353.338: 49.9411% ( 494) 00:26:21.923 2353.338 - 2368.233: 50.8887% ( 507) 00:26:21.923 2368.233 - 2383.127: 51.8138% ( 495) 00:26:21.923 2383.127 - 2398.022: 52.7483% ( 500) 00:26:21.923 2398.022 - 2412.916: 53.6903% ( 504) 00:26:21.923 2412.916 - 2427.811: 54.5893% ( 481) 00:26:21.923 2427.811 - 2442.705: 55.5070% ( 491) 00:26:21.923 2442.705 - 
2457.600: 56.4433% ( 501) 00:26:21.923 2457.600 - 2472.495: 57.3647% ( 493) 00:26:21.923 2472.495 - 2487.389: 58.3123% ( 507) 00:26:21.923 2487.389 - 2502.284: 59.2244% ( 488) 00:26:21.923 2502.284 - 2517.178: 60.1850% ( 514) 00:26:21.923 2517.178 - 2532.073: 61.1158% ( 498) 00:26:21.923 2532.073 - 2546.967: 62.0596% ( 505) 00:26:21.923 2546.967 - 2561.862: 62.9885% ( 497) 00:26:21.923 2561.862 - 2576.756: 63.9174% ( 497) 00:26:21.923 2576.756 - 2591.651: 64.8706% ( 510) 00:26:21.923 2591.651 - 2606.545: 65.7901% ( 492) 00:26:21.923 2606.545 - 2621.440: 66.7433% ( 510) 00:26:21.923 2621.440 - 2636.335: 67.6628% ( 492) 00:26:21.923 2636.335 - 2651.229: 68.5917% ( 497) 00:26:21.923 2651.229 - 2666.124: 69.5430% ( 509) 00:26:21.923 2666.124 - 2681.018: 70.4476% ( 484) 00:26:21.923 2681.018 - 2695.913: 71.3765% ( 497) 00:26:21.923 2695.913 - 2710.807: 72.2904% ( 489) 00:26:21.923 2710.807 - 2725.702: 73.2193% ( 497) 00:26:21.923 2725.702 - 2740.596: 74.1202% ( 482) 00:26:21.923 2740.596 - 2755.491: 75.0621% ( 504) 00:26:21.923 2755.491 - 2770.385: 75.9686% ( 485) 00:26:21.923 2770.385 - 2785.280: 76.8657% ( 480) 00:26:21.924 2785.280 - 2800.175: 77.7367% ( 466) 00:26:21.924 2800.175 - 2815.069: 78.6113% ( 468) 00:26:21.924 2815.069 - 2829.964: 79.4729% ( 461) 00:26:21.924 2829.964 - 2844.858: 80.2841% ( 434) 00:26:21.924 2844.858 - 2859.753: 81.0821% ( 427) 00:26:21.924 2859.753 - 2874.647: 81.8297% ( 400) 00:26:21.924 2874.647 - 2889.542: 82.5399% ( 380) 00:26:21.924 2889.542 - 2904.436: 83.2259% ( 367) 00:26:21.924 2904.436 - 2919.331: 83.8557% ( 337) 00:26:21.924 2919.331 - 2934.225: 84.4575% ( 322) 00:26:21.924 2934.225 - 2949.120: 85.0388% ( 311) 00:26:21.924 2949.120 - 2964.015: 85.6013% ( 301) 00:26:21.924 2964.015 - 2978.909: 86.1508% ( 294) 00:26:21.924 2978.909 - 2993.804: 86.6461% ( 265) 00:26:21.924 2993.804 - 3008.698: 87.1190% ( 253) 00:26:21.924 3008.698 - 3023.593: 87.5470% ( 229) 00:26:21.924 3023.593 - 3038.487: 87.9488% ( 215) 00:26:21.924 3038.487 - 3053.382: 88.3319% ( 205) 00:26:21.924 3053.382 - 3068.276: 88.6945% ( 194) 00:26:21.924 3068.276 - 3083.171: 89.0347% ( 182) 00:26:21.924 3083.171 - 3098.065: 89.3561% ( 172) 00:26:21.924 3098.065 - 3112.960: 89.6720% ( 169) 00:26:21.924 3112.960 - 3127.855: 89.9692% ( 159) 00:26:21.924 3127.855 - 3142.749: 90.2589% ( 155) 00:26:21.924 3142.749 - 3157.644: 90.5261% ( 143) 00:26:21.924 3157.644 - 3172.538: 90.8027% ( 148) 00:26:21.924 3172.538 - 3187.433: 91.0887% ( 153) 00:26:21.924 3187.433 - 3202.327: 91.3541% ( 142) 00:26:21.924 3202.327 - 3217.222: 91.6270% ( 146) 00:26:21.924 3217.222 - 3232.116: 91.8867% ( 139) 00:26:21.924 3232.116 - 3247.011: 92.1521% ( 142) 00:26:21.924 3247.011 - 3261.905: 92.4026% ( 134) 00:26:21.924 3261.905 - 3276.800: 92.6605% ( 138) 00:26:21.924 3276.800 - 3291.695: 92.9222% ( 140) 00:26:21.924 3291.695 - 3306.589: 93.1913% ( 144) 00:26:21.924 3306.589 - 3321.484: 93.4511% ( 139) 00:26:21.924 3321.484 - 3336.378: 93.7183% ( 143) 00:26:21.924 3336.378 - 3351.273: 93.9875% ( 144) 00:26:21.924 3351.273 - 3366.167: 94.2510% ( 141) 00:26:21.924 3366.167 - 3381.062: 94.5164% ( 142) 00:26:21.924 3381.062 - 3395.956: 94.7855% ( 144) 00:26:21.924 3395.956 - 3410.851: 95.0416% ( 137) 00:26:21.924 3410.851 - 3425.745: 95.3051% ( 141) 00:26:21.924 3425.745 - 3440.640: 95.5780% ( 146) 00:26:21.924 3440.640 - 3455.535: 95.8228% ( 131) 00:26:21.924 3455.535 - 3470.429: 96.0863% ( 141) 00:26:21.924 3470.429 - 3485.324: 96.3461% ( 139) 00:26:21.924 3485.324 - 3500.218: 96.6003% ( 136) 00:26:21.924 3500.218 - 
3515.113: 96.8414% ( 129) 00:26:21.924 3515.113 - 3530.007: 97.0844% ( 130) 00:26:21.924 3530.007 - 3544.902: 97.3143% ( 123) 00:26:21.924 3544.902 - 3559.796: 97.5479% ( 125) 00:26:21.924 3559.796 - 3574.691: 97.7628% ( 115) 00:26:21.924 3574.691 - 3589.585: 97.9591% ( 105) 00:26:21.924 3589.585 - 3604.480: 98.1460% ( 100) 00:26:21.924 3604.480 - 3619.375: 98.3011% ( 83) 00:26:21.924 3619.375 - 3634.269: 98.4469% ( 78) 00:26:21.924 3634.269 - 3649.164: 98.5870% ( 75) 00:26:21.924 3649.164 - 3664.058: 98.7104% ( 66) 00:26:21.924 3664.058 - 3678.953: 98.8151% ( 56) 00:26:21.924 3678.953 - 3693.847: 98.9160% ( 54) 00:26:21.924 3693.847 - 3708.742: 99.0038% ( 47) 00:26:21.924 3708.742 - 3723.636: 99.0823% ( 42) 00:26:21.924 3723.636 - 3738.531: 99.1552% ( 39) 00:26:21.924 3738.531 - 3753.425: 99.2038% ( 26) 00:26:21.924 3753.425 - 3768.320: 99.2449% ( 22) 00:26:21.924 3768.320 - 3783.215: 99.2823% ( 20) 00:26:21.924 3783.215 - 3798.109: 99.3085% ( 14) 00:26:21.924 3798.109 - 3813.004: 99.3346% ( 14) 00:26:21.924 3813.004 - 3842.793: 99.3758% ( 22) 00:26:21.924 3842.793 - 3872.582: 99.4094% ( 18) 00:26:21.924 3872.582 - 3902.371: 99.4374% ( 15) 00:26:21.924 3902.371 - 3932.160: 99.4692% ( 17) 00:26:21.924 3932.160 - 3961.949: 99.4972% ( 15) 00:26:21.924 3961.949 - 3991.738: 99.5197% ( 12) 00:26:21.924 3991.738 - 4021.527: 99.5402% ( 11) 00:26:21.924 4021.527 - 4051.316: 99.5627% ( 12) 00:26:21.924 4051.316 - 4081.105: 99.5851% ( 12) 00:26:21.924 4081.105 - 4110.895: 99.6075% ( 12) 00:26:21.924 4110.895 - 4140.684: 99.6281% ( 11) 00:26:21.924 4140.684 - 4170.473: 99.6505% ( 12) 00:26:21.924 4170.473 - 4200.262: 99.6692% ( 10) 00:26:21.924 4200.262 - 4230.051: 99.6897% ( 11) 00:26:21.924 4230.051 - 4259.840: 99.7084% ( 10) 00:26:21.924 4259.840 - 4289.629: 99.7271% ( 10) 00:26:21.924 4289.629 - 4319.418: 99.7477% ( 11) 00:26:21.924 4319.418 - 4349.207: 99.7664% ( 10) 00:26:21.924 4349.207 - 4378.996: 99.7888% ( 12) 00:26:21.924 4378.996 - 4408.785: 99.8094% ( 11) 00:26:21.924 4408.785 - 4438.575: 99.8224% ( 7) 00:26:21.924 4438.575 - 4468.364: 99.8337% ( 6) 00:26:21.924 4468.364 - 4498.153: 99.8486% ( 8) 00:26:21.924 4498.153 - 4527.942: 99.8580% ( 5) 00:26:21.924 4527.942 - 4557.731: 99.8710% ( 7) 00:26:21.924 4557.731 - 4587.520: 99.8841% ( 7) 00:26:21.924 4587.520 - 4617.309: 99.8991% ( 8) 00:26:21.924 4617.309 - 4647.098: 99.9122% ( 7) 00:26:21.924 4647.098 - 4676.887: 99.9215% ( 5) 00:26:21.924 4676.887 - 4706.676: 99.9308% ( 5) 00:26:21.924 4706.676 - 4736.465: 99.9402% ( 5) 00:26:21.924 4736.465 - 4766.255: 99.9439% ( 2) 00:26:21.924 4766.255 - 4796.044: 99.9514% ( 4) 00:26:21.924 4796.044 - 4825.833: 99.9570% ( 3) 00:26:21.924 4825.833 - 4855.622: 99.9626% ( 3) 00:26:21.924 4855.622 - 4885.411: 99.9664% ( 2) 00:26:21.924 4885.411 - 4915.200: 99.9682% ( 1) 00:26:21.924 4915.200 - 4944.989: 99.9720% ( 2) 00:26:21.924 4944.989 - 4974.778: 99.9738% ( 1) 00:26:21.924 4974.778 - 5004.567: 99.9776% ( 2) 00:26:21.924 5004.567 - 5034.356: 99.9794% ( 1) 00:26:21.924 5034.356 - 5064.145: 99.9832% ( 2) 00:26:21.924 5064.145 - 5093.935: 99.9850% ( 1) 00:26:21.924 5093.935 - 5123.724: 99.9888% ( 2) 00:26:21.924 5123.724 - 5153.513: 99.9925% ( 2) 00:26:21.924 5153.513 - 5183.302: 99.9944% ( 1) 00:26:21.924 5183.302 - 5213.091: 99.9981% ( 2) 00:26:21.924 5272.669 - 5302.458: 100.0000% ( 1) 00:26:21.924 00:26:21.924 14:28:13 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:26:22.880 Initializing NVMe Controllers 00:26:22.880 Attached 
to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:22.880 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:22.880 Initialization complete. Launching workers. 00:26:22.880 ======================================================== 00:26:22.880 Latency(us) 00:26:22.880 Device Information : IOPS MiB/s Average min max 00:26:22.880 PCIE (0000:00:06.0) NSID 1 from core 0: 56055.94 656.91 2285.05 1200.69 7626.21 00:26:22.880 ======================================================== 00:26:22.880 Total : 56055.94 656.91 2285.05 1200.69 7626.21 00:26:22.880 00:26:22.880 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:22.880 ================================================================================= 00:26:22.880 1.00000% : 1712.873us 00:26:22.880 10.00000% : 1951.185us 00:26:22.880 25.00000% : 2085.236us 00:26:22.880 50.00000% : 2249.076us 00:26:22.880 75.00000% : 2427.811us 00:26:22.880 90.00000% : 2710.807us 00:26:22.880 95.00000% : 2904.436us 00:26:22.880 98.00000% : 3127.855us 00:26:22.880 99.00000% : 3291.695us 00:26:22.880 99.50000% : 3559.796us 00:26:22.880 99.90000% : 4468.364us 00:26:22.880 99.99000% : 7626.007us 00:26:22.880 99.99900% : 7685.585us 00:26:22.880 99.99990% : 7685.585us 00:26:22.880 99.99999% : 7685.585us 00:26:22.880 00:26:22.880 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:26:22.880 ============================================================================== 00:26:22.880 Range in us Cumulative IO count 00:26:22.880 1199.011 - 1206.458: 0.0018% ( 1) 00:26:22.880 1228.800 - 1236.247: 0.0036% ( 1) 00:26:22.880 1251.142 - 1258.589: 0.0054% ( 1) 00:26:22.880 1273.484 - 1280.931: 0.0071% ( 1) 00:26:22.880 1280.931 - 1288.378: 0.0161% ( 5) 00:26:22.880 1288.378 - 1295.825: 0.0196% ( 2) 00:26:22.880 1295.825 - 1303.273: 0.0232% ( 2) 00:26:22.880 1310.720 - 1318.167: 0.0268% ( 2) 00:26:22.880 1318.167 - 1325.615: 0.0339% ( 4) 00:26:22.880 1325.615 - 1333.062: 0.0357% ( 1) 00:26:22.880 1333.062 - 1340.509: 0.0375% ( 1) 00:26:22.880 1340.509 - 1347.956: 0.0410% ( 2) 00:26:22.880 1347.956 - 1355.404: 0.0446% ( 2) 00:26:22.880 1355.404 - 1362.851: 0.0553% ( 6) 00:26:22.880 1362.851 - 1370.298: 0.0589% ( 2) 00:26:22.880 1377.745 - 1385.193: 0.0642% ( 3) 00:26:22.880 1385.193 - 1392.640: 0.0678% ( 2) 00:26:22.880 1392.640 - 1400.087: 0.0749% ( 4) 00:26:22.880 1400.087 - 1407.535: 0.0785% ( 2) 00:26:22.880 1407.535 - 1414.982: 0.1017% ( 13) 00:26:22.880 1414.982 - 1422.429: 0.1213% ( 11) 00:26:22.880 1422.429 - 1429.876: 0.1463% ( 14) 00:26:22.880 1429.876 - 1437.324: 0.1748% ( 16) 00:26:22.880 1437.324 - 1444.771: 0.1998% ( 14) 00:26:22.880 1444.771 - 1452.218: 0.2355% ( 20) 00:26:22.880 1452.218 - 1459.665: 0.2658% ( 17) 00:26:22.880 1459.665 - 1467.113: 0.2819% ( 9) 00:26:22.880 1467.113 - 1474.560: 0.2890% ( 4) 00:26:22.880 1474.560 - 1482.007: 0.3015% ( 7) 00:26:22.880 1482.007 - 1489.455: 0.3068% ( 3) 00:26:22.880 1489.455 - 1496.902: 0.3175% ( 6) 00:26:22.880 1496.902 - 1504.349: 0.3265% ( 5) 00:26:22.881 1504.349 - 1511.796: 0.3407% ( 8) 00:26:22.881 1511.796 - 1519.244: 0.3497% ( 5) 00:26:22.881 1519.244 - 1526.691: 0.3639% ( 8) 00:26:22.881 1526.691 - 1534.138: 0.3764% ( 7) 00:26:22.881 1534.138 - 1541.585: 0.3996% ( 13) 00:26:22.881 1541.585 - 1549.033: 0.4085% ( 5) 00:26:22.881 1549.033 - 1556.480: 0.4353% ( 15) 00:26:22.881 1556.480 - 1563.927: 0.4906% ( 31) 00:26:22.881 1563.927 - 1571.375: 0.5066% ( 9) 00:26:22.881 1571.375 - 1578.822: 0.5191% ( 7) 00:26:22.881 1578.822 - 1586.269: 0.5352% ( 9) 00:26:22.881 1586.269 - 
1593.716: 0.5477% ( 7) 00:26:22.881 1593.716 - 1601.164: 0.5619% ( 8) 00:26:22.881 1601.164 - 1608.611: 0.5833% ( 12) 00:26:22.881 1608.611 - 1616.058: 0.5905% ( 4) 00:26:22.881 1616.058 - 1623.505: 0.6101% ( 11) 00:26:22.881 1623.505 - 1630.953: 0.6333% ( 13) 00:26:22.881 1630.953 - 1638.400: 0.6529% ( 11) 00:26:22.881 1638.400 - 1645.847: 0.6708% ( 10) 00:26:22.881 1645.847 - 1653.295: 0.6993% ( 16) 00:26:22.881 1653.295 - 1660.742: 0.7261% ( 15) 00:26:22.881 1660.742 - 1668.189: 0.7493% ( 13) 00:26:22.881 1668.189 - 1675.636: 0.7992% ( 28) 00:26:22.881 1675.636 - 1683.084: 0.8349% ( 20) 00:26:22.881 1683.084 - 1690.531: 0.8955% ( 34) 00:26:22.881 1690.531 - 1697.978: 0.9419% ( 26) 00:26:22.881 1697.978 - 1705.425: 0.9954% ( 30) 00:26:22.881 1705.425 - 1712.873: 1.0382% ( 24) 00:26:22.881 1712.873 - 1720.320: 1.1096% ( 40) 00:26:22.881 1720.320 - 1727.767: 1.1703% ( 34) 00:26:22.881 1727.767 - 1735.215: 1.2559% ( 48) 00:26:22.881 1735.215 - 1742.662: 1.3522% ( 54) 00:26:22.881 1742.662 - 1750.109: 1.4503% ( 55) 00:26:22.881 1750.109 - 1757.556: 1.5627% ( 63) 00:26:22.881 1757.556 - 1765.004: 1.7161% ( 86) 00:26:22.881 1765.004 - 1772.451: 1.8535% ( 77) 00:26:22.881 1772.451 - 1779.898: 2.0319% ( 100) 00:26:22.881 1779.898 - 1787.345: 2.2174% ( 104) 00:26:22.881 1787.345 - 1794.793: 2.4172% ( 112) 00:26:22.881 1794.793 - 1802.240: 2.5867% ( 95) 00:26:22.881 1802.240 - 1809.687: 2.8168% ( 129) 00:26:22.881 1809.687 - 1817.135: 3.0612% ( 137) 00:26:22.881 1817.135 - 1824.582: 3.3038% ( 136) 00:26:22.881 1824.582 - 1832.029: 3.5411% ( 133) 00:26:22.881 1832.029 - 1839.476: 3.8105% ( 151) 00:26:22.881 1839.476 - 1846.924: 4.1494% ( 190) 00:26:22.881 1846.924 - 1854.371: 4.4652% ( 177) 00:26:22.881 1854.371 - 1861.818: 4.8095% ( 193) 00:26:22.881 1861.818 - 1869.265: 5.1805% ( 208) 00:26:22.881 1869.265 - 1876.713: 5.5926% ( 231) 00:26:22.881 1876.713 - 1884.160: 6.0493% ( 256) 00:26:22.881 1884.160 - 1891.607: 6.4757% ( 239) 00:26:22.881 1891.607 - 1899.055: 6.9252% ( 252) 00:26:22.881 1899.055 - 1906.502: 7.4194% ( 277) 00:26:22.881 1906.502 - 1921.396: 8.5878% ( 655) 00:26:22.881 1921.396 - 1936.291: 9.7117% ( 630) 00:26:22.881 1936.291 - 1951.185: 11.1103% ( 784) 00:26:22.881 1951.185 - 1966.080: 12.5535% ( 809) 00:26:22.881 1966.080 - 1980.975: 14.2126% ( 930) 00:26:22.881 1980.975 - 1995.869: 15.7610% ( 868) 00:26:22.881 1995.869 - 2010.764: 17.4344% ( 938) 00:26:22.881 2010.764 - 2025.658: 19.1327% ( 952) 00:26:22.881 2025.658 - 2040.553: 21.0361% ( 1067) 00:26:22.881 2040.553 - 2055.447: 22.8343% ( 1008) 00:26:22.881 2055.447 - 2070.342: 24.8484% ( 1129) 00:26:22.881 2070.342 - 2085.236: 26.8910% ( 1145) 00:26:22.881 2085.236 - 2100.131: 28.8051% ( 1073) 00:26:22.881 2100.131 - 2115.025: 30.8370% ( 1139) 00:26:22.881 2115.025 - 2129.920: 32.9670% ( 1194) 00:26:22.881 2129.920 - 2144.815: 35.2291% ( 1268) 00:26:22.881 2144.815 - 2159.709: 37.2485% ( 1132) 00:26:22.881 2159.709 - 2174.604: 39.5622% ( 1297) 00:26:22.881 2174.604 - 2189.498: 42.0740% ( 1408) 00:26:22.881 2189.498 - 2204.393: 44.4288% ( 1320) 00:26:22.881 2204.393 - 2219.287: 46.8193% ( 1340) 00:26:22.881 2219.287 - 2234.182: 49.3756% ( 1433) 00:26:22.881 2234.182 - 2249.076: 51.6876% ( 1296) 00:26:22.881 2249.076 - 2263.971: 53.9835% ( 1287) 00:26:22.881 2263.971 - 2278.865: 56.3098% ( 1304) 00:26:22.881 2278.865 - 2293.760: 58.7609% ( 1374) 00:26:22.881 2293.760 - 2308.655: 61.2798% ( 1412) 00:26:22.881 2308.655 - 2323.549: 63.5151% ( 1253) 00:26:22.881 2323.549 - 2338.444: 65.4917% ( 1108) 00:26:22.881 2338.444 - 
2353.338: 67.2631% ( 993) 00:26:22.881 2353.338 - 2368.233: 69.0399% ( 996) 00:26:22.881 2368.233 - 2383.127: 70.6597% ( 908) 00:26:22.881 2383.127 - 2398.022: 72.2866% ( 912) 00:26:22.881 2398.022 - 2412.916: 73.9403% ( 927) 00:26:22.881 2412.916 - 2427.811: 75.3907% ( 813) 00:26:22.881 2427.811 - 2442.705: 76.6287% ( 694) 00:26:22.881 2442.705 - 2457.600: 77.8507% ( 685) 00:26:22.881 2457.600 - 2472.495: 79.0156% ( 653) 00:26:22.881 2472.495 - 2487.389: 80.1252% ( 622) 00:26:22.881 2487.389 - 2502.284: 81.1421% ( 570) 00:26:22.881 2502.284 - 2517.178: 82.0697% ( 520) 00:26:22.881 2517.178 - 2532.073: 82.8796% ( 454) 00:26:22.881 2532.073 - 2546.967: 83.6645% ( 440) 00:26:22.881 2546.967 - 2561.862: 84.3870% ( 405) 00:26:22.881 2561.862 - 2576.756: 85.0703% ( 383) 00:26:22.881 2576.756 - 2591.651: 85.7303% ( 370) 00:26:22.881 2591.651 - 2606.545: 86.4082% ( 380) 00:26:22.881 2606.545 - 2621.440: 87.0201% ( 343) 00:26:22.881 2621.440 - 2636.335: 87.6516% ( 354) 00:26:22.881 2636.335 - 2651.229: 88.1993% ( 307) 00:26:22.881 2651.229 - 2666.124: 88.7345% ( 300) 00:26:22.881 2666.124 - 2681.018: 89.2946% ( 314) 00:26:22.881 2681.018 - 2695.913: 89.7460% ( 253) 00:26:22.881 2695.913 - 2710.807: 90.2080% ( 259) 00:26:22.881 2710.807 - 2725.702: 90.7111% ( 282) 00:26:22.881 2725.702 - 2740.596: 91.2017% ( 275) 00:26:22.881 2740.596 - 2755.491: 91.6155% ( 232) 00:26:22.881 2755.491 - 2770.385: 92.0758% ( 258) 00:26:22.881 2770.385 - 2785.280: 92.4397% ( 204) 00:26:22.881 2785.280 - 2800.175: 92.8340% ( 221) 00:26:22.881 2800.175 - 2815.069: 93.2175% ( 215) 00:26:22.881 2815.069 - 2829.964: 93.5600% ( 192) 00:26:22.881 2829.964 - 2844.858: 93.9061% ( 194) 00:26:22.881 2844.858 - 2859.753: 94.2308% ( 182) 00:26:22.881 2859.753 - 2874.647: 94.5590% ( 184) 00:26:22.881 2874.647 - 2889.542: 94.8605% ( 169) 00:26:22.881 2889.542 - 2904.436: 95.1352% ( 154) 00:26:22.881 2904.436 - 2919.331: 95.4224% ( 161) 00:26:22.881 2919.331 - 2934.225: 95.6954% ( 153) 00:26:22.881 2934.225 - 2949.120: 95.9344% ( 134) 00:26:22.881 2949.120 - 2964.015: 96.1753% ( 135) 00:26:22.881 2964.015 - 2978.909: 96.3858% ( 118) 00:26:22.881 2978.909 - 2993.804: 96.6070% ( 124) 00:26:22.881 2993.804 - 3008.698: 96.7978% ( 107) 00:26:22.881 3008.698 - 3023.593: 96.9780% ( 101) 00:26:22.881 3023.593 - 3038.487: 97.1528% ( 98) 00:26:22.881 3038.487 - 3053.382: 97.3080% ( 87) 00:26:22.881 3053.382 - 3068.276: 97.4633% ( 87) 00:26:22.881 3068.276 - 3083.171: 97.5917% ( 72) 00:26:22.881 3083.171 - 3098.065: 97.7451% ( 86) 00:26:22.881 3098.065 - 3112.960: 97.8825% ( 77) 00:26:22.881 3112.960 - 3127.855: 98.0056% ( 69) 00:26:22.881 3127.855 - 3142.749: 98.1251% ( 67) 00:26:22.881 3142.749 - 3157.644: 98.2571% ( 74) 00:26:22.881 3157.644 - 3172.538: 98.3588% ( 57) 00:26:22.881 3172.538 - 3187.433: 98.4587% ( 56) 00:26:22.881 3187.433 - 3202.327: 98.5443% ( 48) 00:26:22.881 3202.327 - 3217.222: 98.6282% ( 47) 00:26:22.881 3217.222 - 3232.116: 98.7102% ( 46) 00:26:22.881 3232.116 - 3247.011: 98.7869% ( 43) 00:26:22.881 3247.011 - 3261.905: 98.8619% ( 42) 00:26:22.881 3261.905 - 3276.800: 98.9332% ( 40) 00:26:22.881 3276.800 - 3291.695: 99.0010% ( 38) 00:26:22.881 3291.695 - 3306.589: 99.0581% ( 32) 00:26:22.881 3306.589 - 3321.484: 99.1063% ( 27) 00:26:22.881 3321.484 - 3336.378: 99.1508% ( 25) 00:26:22.881 3336.378 - 3351.273: 99.1865% ( 20) 00:26:22.881 3351.273 - 3366.167: 99.2151% ( 16) 00:26:22.881 3366.167 - 3381.062: 99.2561% ( 23) 00:26:22.881 3381.062 - 3395.956: 99.2829% ( 15) 00:26:22.881 3395.956 - 3410.851: 99.3061% ( 13) 
00:26:22.881 3410.851 - 3425.745: 99.3292% ( 13) 00:26:22.881 3425.745 - 3440.640: 99.3489% ( 11) 00:26:22.881 3440.640 - 3455.535: 99.3667% ( 10) 00:26:22.881 3455.535 - 3470.429: 99.3845% ( 10) 00:26:22.881 3470.429 - 3485.324: 99.4060% ( 12) 00:26:22.881 3485.324 - 3500.218: 99.4274% ( 12) 00:26:22.881 3500.218 - 3515.113: 99.4452% ( 10) 00:26:22.881 3515.113 - 3530.007: 99.4648% ( 11) 00:26:22.881 3530.007 - 3544.902: 99.4827% ( 10) 00:26:22.881 3544.902 - 3559.796: 99.5005% ( 10) 00:26:22.881 3559.796 - 3574.691: 99.5148% ( 8) 00:26:22.881 3574.691 - 3589.585: 99.5308% ( 9) 00:26:22.881 3589.585 - 3604.480: 99.5522% ( 12) 00:26:22.881 3604.480 - 3619.375: 99.5701% ( 10) 00:26:22.881 3619.375 - 3634.269: 99.5843% ( 8) 00:26:22.881 3634.269 - 3649.164: 99.6004% ( 9) 00:26:22.881 3649.164 - 3664.058: 99.6182% ( 10) 00:26:22.881 3664.058 - 3678.953: 99.6343% ( 9) 00:26:22.881 3678.953 - 3693.847: 99.6468% ( 7) 00:26:22.881 3693.847 - 3708.742: 99.6628% ( 9) 00:26:22.881 3708.742 - 3723.636: 99.6771% ( 8) 00:26:22.881 3723.636 - 3738.531: 99.6914% ( 8) 00:26:22.881 3738.531 - 3753.425: 99.7021% ( 6) 00:26:22.881 3753.425 - 3768.320: 99.7181% ( 9) 00:26:22.881 3768.320 - 3783.215: 99.7306% ( 7) 00:26:22.881 3783.215 - 3798.109: 99.7449% ( 8) 00:26:22.881 3798.109 - 3813.004: 99.7538% ( 5) 00:26:22.881 3813.004 - 3842.793: 99.7663% ( 7) 00:26:22.881 3842.793 - 3872.582: 99.7788% ( 7) 00:26:22.881 3872.582 - 3902.371: 99.7913% ( 7) 00:26:22.881 3902.371 - 3932.160: 99.8038% ( 7) 00:26:22.881 3932.160 - 3961.949: 99.8163% ( 7) 00:26:22.882 3961.949 - 3991.738: 99.8234% ( 4) 00:26:22.882 3991.738 - 4021.527: 99.8323% ( 5) 00:26:22.882 4021.527 - 4051.316: 99.8394% ( 4) 00:26:22.882 4051.316 - 4081.105: 99.8448% ( 3) 00:26:22.882 4081.105 - 4110.895: 99.8484% ( 2) 00:26:22.882 4110.895 - 4140.684: 99.8519% ( 2) 00:26:22.882 4140.684 - 4170.473: 99.8555% ( 2) 00:26:22.882 4170.473 - 4200.262: 99.8591% ( 2) 00:26:22.882 4200.262 - 4230.051: 99.8644% ( 3) 00:26:22.882 4230.051 - 4259.840: 99.8662% ( 1) 00:26:22.882 4259.840 - 4289.629: 99.8716% ( 3) 00:26:22.882 4289.629 - 4319.418: 99.8769% ( 3) 00:26:23.149 4319.418 - 4349.207: 99.8805% ( 2) 00:26:23.149 4349.207 - 4378.996: 99.8876% ( 4) 00:26:23.149 4378.996 - 4408.785: 99.8930% ( 3) 00:26:23.149 4408.785 - 4438.575: 99.8965% ( 2) 00:26:23.149 4438.575 - 4468.364: 99.9001% ( 2) 00:26:23.149 4468.364 - 4498.153: 99.9037% ( 2) 00:26:23.149 4498.153 - 4527.942: 99.9072% ( 2) 00:26:23.149 4527.942 - 4557.731: 99.9108% ( 2) 00:26:23.149 4557.731 - 4587.520: 99.9144% ( 2) 00:26:23.149 4587.520 - 4617.309: 99.9162% ( 1) 00:26:23.149 4617.309 - 4647.098: 99.9197% ( 2) 00:26:23.149 4647.098 - 4676.887: 99.9233% ( 2) 00:26:23.149 4676.887 - 4706.676: 99.9269% ( 2) 00:26:23.149 4706.676 - 4736.465: 99.9304% ( 2) 00:26:23.149 4736.465 - 4766.255: 99.9358% ( 3) 00:26:23.149 4766.255 - 4796.044: 99.9376% ( 1) 00:26:23.149 4796.044 - 4825.833: 99.9393% ( 1) 00:26:23.149 4944.989 - 4974.778: 99.9411% ( 1) 00:26:23.149 5004.567 - 5034.356: 99.9429% ( 1) 00:26:23.149 5242.880 - 5272.669: 99.9447% ( 1) 00:26:23.149 5272.669 - 5302.458: 99.9572% ( 7) 00:26:23.149 5302.458 - 5332.247: 99.9643% ( 4) 00:26:23.149 5332.247 - 5362.036: 99.9661% ( 1) 00:26:23.149 6583.389 - 6613.178: 99.9679% ( 1) 00:26:23.149 6762.124 - 6791.913: 99.9839% ( 9) 00:26:23.149 7119.593 - 7149.382: 99.9857% ( 1) 00:26:23.149 7149.382 - 7179.171: 99.9875% ( 1) 00:26:23.149 7596.218 - 7626.007: 99.9982% ( 6) 00:26:23.149 7626.007 - 7685.585: 100.0000% ( 1) 00:26:23.149 00:26:23.149 14:28:14 
-- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:26:23.149 00:26:23.149 real 0m2.540s 00:26:23.149 user 0m2.218s 00:26:23.149 sys 0m0.187s 00:26:23.149 14:28:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:23.149 ************************************ 00:26:23.149 END TEST nvme_perf 00:26:23.149 ************************************ 00:26:23.149 14:28:14 -- common/autotest_common.sh@10 -- # set +x 00:26:23.149 14:28:14 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:26:23.149 14:28:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:23.149 14:28:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.149 14:28:14 -- common/autotest_common.sh@10 -- # set +x 00:26:23.149 ************************************ 00:26:23.149 START TEST nvme_hello_world 00:26:23.149 ************************************ 00:26:23.150 14:28:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:26:23.408 Initializing NVMe Controllers 00:26:23.408 Attached to 0000:00:06.0 00:26:23.408 Namespace ID: 1 size: 5GB 00:26:23.408 Initialization complete. 00:26:23.408 INFO: using host memory buffer for IO 00:26:23.408 Hello world! 00:26:23.408 00:26:23.408 real 0m0.252s 00:26:23.408 user 0m0.097s 00:26:23.408 sys 0m0.097s 00:26:23.408 14:28:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:23.408 ************************************ 00:26:23.408 END TEST nvme_hello_world 00:26:23.408 ************************************ 00:26:23.408 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.408 14:28:15 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:26:23.408 14:28:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:23.408 14:28:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.408 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.408 ************************************ 00:26:23.408 START TEST nvme_sgl 00:26:23.408 ************************************ 00:26:23.408 14:28:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:26:23.666 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:26:23.666 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:26:23.666 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:26:23.666 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:26:23.666 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:26:23.666 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:26:23.666 NVMe Readv/Writev Request test 00:26:23.666 Attached to 0000:00:06.0 00:26:23.666 0000:00:06.0: build_io_request_2 test passed 00:26:23.666 0000:00:06.0: build_io_request_4 test passed 00:26:23.666 0000:00:06.0: build_io_request_5 test passed 00:26:23.666 0000:00:06.0: build_io_request_6 test passed 00:26:23.666 0000:00:06.0: build_io_request_7 test passed 00:26:23.666 0000:00:06.0: build_io_request_10 test passed 00:26:23.666 Cleaning up... 
00:26:23.666 00:26:23.666 real 0m0.302s 00:26:23.666 user 0m0.154s 00:26:23.666 sys 0m0.080s 00:26:23.666 14:28:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:23.666 ************************************ 00:26:23.666 END TEST nvme_sgl 00:26:23.666 ************************************ 00:26:23.666 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.666 14:28:15 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:26:23.666 14:28:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:23.666 14:28:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.666 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.666 ************************************ 00:26:23.666 START TEST nvme_e2edp 00:26:23.666 ************************************ 00:26:23.667 14:28:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:26:23.925 NVMe Write/Read with End-to-End data protection test 00:26:23.925 Attached to 0000:00:06.0 00:26:23.925 Cleaning up... 00:26:23.925 00:26:23.925 real 0m0.294s 00:26:23.925 user 0m0.085s 00:26:23.925 sys 0m0.107s 00:26:23.925 14:28:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:23.925 ************************************ 00:26:23.925 END TEST nvme_e2edp 00:26:23.925 ************************************ 00:26:23.925 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.925 14:28:15 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:26:23.925 14:28:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:23.925 14:28:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.925 14:28:15 -- common/autotest_common.sh@10 -- # set +x 00:26:23.925 ************************************ 00:26:23.925 START TEST nvme_reserve 00:26:23.925 ************************************ 00:26:23.925 14:28:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:26:24.184 ===================================================== 00:26:24.184 NVMe Controller at PCI bus 0, device 6, function 0 00:26:24.184 ===================================================== 00:26:24.184 Reservations: Not Supported 00:26:24.184 Reservation test passed 00:26:24.184 00:26:24.184 real 0m0.261s 00:26:24.184 user 0m0.079s 00:26:24.184 sys 0m0.107s 00:26:24.184 14:28:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:24.184 ************************************ 00:26:24.184 END TEST nvme_reserve 00:26:24.184 ************************************ 00:26:24.184 14:28:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.442 14:28:16 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:26:24.442 14:28:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:24.442 14:28:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.442 14:28:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.443 ************************************ 00:26:24.443 START TEST nvme_err_injection 00:26:24.443 ************************************ 00:26:24.443 14:28:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:26:24.702 NVMe Error Injection test 00:26:24.702 Attached to 0000:00:06.0 00:26:24.702 0000:00:06.0: get features failed as expected 00:26:24.702 0000:00:06.0: get features successfully as expected 00:26:24.702 0000:00:06.0: 
read failed as expected 00:26:24.702 0000:00:06.0: read successfully as expected 00:26:24.702 Cleaning up... 00:26:24.702 00:26:24.702 real 0m0.292s 00:26:24.702 user 0m0.102s 00:26:24.702 sys 0m0.099s 00:26:24.702 14:28:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:24.702 ************************************ 00:26:24.702 END TEST nvme_err_injection 00:26:24.702 ************************************ 00:26:24.702 14:28:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.702 14:28:16 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:26:24.702 14:28:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:26:24.702 14:28:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.702 14:28:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.702 ************************************ 00:26:24.702 START TEST nvme_overhead 00:26:24.702 ************************************ 00:26:24.702 14:28:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:26:26.081 Initializing NVMe Controllers 00:26:26.081 Attached to 0000:00:06.0 00:26:26.081 Initialization complete. Launching workers. 00:26:26.081 submit (in ns) avg, min, max = 15205.9, 10606.4, 104719.1 00:26:26.081 complete (in ns) avg, min, max = 11067.5, 7481.8, 146875.5 00:26:26.081 00:26:26.081 Submit histogram 00:26:26.081 ================ 00:26:26.081 Range in us Cumulative Count 00:26:26.081 10.589 - 10.647: 0.0125% ( 1) 00:26:26.081 10.647 - 10.705: 0.0249% ( 1) 00:26:26.081 10.705 - 10.764: 0.0374% ( 1) 00:26:26.081 10.764 - 10.822: 0.1745% ( 11) 00:26:26.081 10.822 - 10.880: 0.6108% ( 35) 00:26:26.081 10.880 - 10.938: 2.1067% ( 120) 00:26:26.081 10.938 - 10.996: 4.7494% ( 212) 00:26:26.081 10.996 - 11.055: 8.1650% ( 274) 00:26:26.081 11.055 - 11.113: 11.1444% ( 239) 00:26:26.081 11.113 - 11.171: 12.6153% ( 118) 00:26:26.081 11.171 - 11.229: 13.3009% ( 55) 00:26:26.081 11.229 - 11.287: 14.0613% ( 61) 00:26:26.081 11.287 - 11.345: 15.1085% ( 84) 00:26:26.081 11.345 - 11.404: 17.0406% ( 155) 00:26:26.081 11.404 - 11.462: 20.7305% ( 296) 00:26:26.081 11.462 - 11.520: 25.0561% ( 347) 00:26:26.081 11.520 - 11.578: 28.4842% ( 275) 00:26:26.081 11.578 - 11.636: 31.2516% ( 222) 00:26:26.081 11.636 - 11.695: 34.0065% ( 221) 00:26:26.081 11.695 - 11.753: 36.6866% ( 215) 00:26:26.081 11.753 - 11.811: 39.5537% ( 230) 00:26:26.081 11.811 - 11.869: 41.5482% ( 160) 00:26:26.081 11.869 - 11.927: 43.0940% ( 124) 00:26:26.081 11.927 - 11.985: 44.2159% ( 90) 00:26:26.081 11.985 - 12.044: 45.3129% ( 88) 00:26:26.081 12.044 - 12.102: 46.0359% ( 58) 00:26:26.081 12.102 - 12.160: 46.9085% ( 70) 00:26:26.081 12.160 - 12.218: 47.4071% ( 40) 00:26:26.081 12.218 - 12.276: 47.7063% ( 24) 00:26:26.081 12.276 - 12.335: 48.0429% ( 27) 00:26:26.081 12.335 - 12.393: 48.3919% ( 28) 00:26:26.081 12.393 - 12.451: 48.6288% ( 19) 00:26:26.081 12.451 - 12.509: 48.7659% ( 11) 00:26:26.081 12.509 - 12.567: 48.9279% ( 13) 00:26:26.081 12.567 - 12.625: 49.1149% ( 15) 00:26:26.081 12.625 - 12.684: 49.2645% ( 12) 00:26:26.081 12.684 - 12.742: 49.4141% ( 12) 00:26:26.081 12.742 - 12.800: 49.5014% ( 7) 00:26:26.081 12.800 - 12.858: 49.6759% ( 14) 00:26:26.081 12.858 - 12.916: 49.7756% ( 8) 00:26:26.081 12.916 - 12.975: 49.9377% ( 13) 00:26:26.081 12.975 - 13.033: 50.0249% ( 7) 00:26:26.081 13.033 - 13.091: 50.1247% ( 8) 00:26:26.082 13.091 - 13.149: 50.2368% ( 9) 00:26:26.082 13.149 - 13.207: 50.2867% ( 4) 
00:26:26.082 13.207 - 13.265: 50.4114% ( 10) 00:26:26.082 13.265 - 13.324: 50.4737% ( 5) 00:26:26.082 13.324 - 13.382: 50.6108% ( 11) 00:26:26.082 13.382 - 13.440: 50.7230% ( 9) 00:26:26.082 13.440 - 13.498: 50.8477% ( 10) 00:26:26.082 13.498 - 13.556: 50.8851% ( 3) 00:26:26.082 13.556 - 13.615: 50.9848% ( 8) 00:26:26.082 13.615 - 13.673: 51.0471% ( 5) 00:26:26.082 13.673 - 13.731: 51.1094% ( 5) 00:26:26.082 13.731 - 13.789: 51.1468% ( 3) 00:26:26.082 13.789 - 13.847: 51.1967% ( 4) 00:26:26.082 13.847 - 13.905: 51.2092% ( 1) 00:26:26.082 13.905 - 13.964: 51.2216% ( 1) 00:26:26.082 13.964 - 14.022: 51.2590% ( 3) 00:26:26.082 14.022 - 14.080: 51.2964% ( 3) 00:26:26.082 14.080 - 14.138: 51.3214% ( 2) 00:26:26.082 14.138 - 14.196: 51.3837% ( 5) 00:26:26.082 14.196 - 14.255: 51.4336% ( 4) 00:26:26.082 14.255 - 14.313: 51.4834% ( 4) 00:26:26.082 14.313 - 14.371: 51.5333% ( 4) 00:26:26.082 14.371 - 14.429: 51.5956% ( 5) 00:26:26.082 14.429 - 14.487: 51.6205% ( 2) 00:26:26.082 14.487 - 14.545: 51.6455% ( 2) 00:26:26.082 14.545 - 14.604: 51.6579% ( 1) 00:26:26.082 14.604 - 14.662: 51.7078% ( 4) 00:26:26.082 14.662 - 14.720: 51.7327% ( 2) 00:26:26.082 14.720 - 14.778: 51.8075% ( 6) 00:26:26.082 14.778 - 14.836: 51.8449% ( 3) 00:26:26.082 14.836 - 14.895: 51.8699% ( 2) 00:26:26.082 14.895 - 15.011: 51.9571% ( 7) 00:26:26.082 15.011 - 15.127: 52.0070% ( 4) 00:26:26.082 15.127 - 15.244: 52.0693% ( 5) 00:26:26.082 15.244 - 15.360: 52.0942% ( 2) 00:26:26.082 15.360 - 15.476: 52.1441% ( 4) 00:26:26.082 15.476 - 15.593: 52.2438% ( 8) 00:26:26.082 15.593 - 15.709: 52.3685% ( 10) 00:26:26.082 15.709 - 15.825: 52.4682% ( 8) 00:26:26.082 15.825 - 15.942: 52.6677% ( 16) 00:26:26.082 15.942 - 16.058: 53.4031% ( 59) 00:26:26.082 16.058 - 16.175: 60.3465% ( 557) 00:26:26.082 16.175 - 16.291: 71.8649% ( 924) 00:26:26.082 16.291 - 16.407: 77.1379% ( 423) 00:26:26.082 16.407 - 16.524: 79.7806% ( 212) 00:26:26.082 16.524 - 16.640: 80.9399% ( 93) 00:26:26.082 16.640 - 16.756: 81.7876% ( 68) 00:26:26.082 16.756 - 16.873: 82.3236% ( 43) 00:26:26.082 16.873 - 16.989: 82.7350% ( 33) 00:26:26.082 16.989 - 17.105: 83.1090% ( 30) 00:26:26.082 17.105 - 17.222: 83.4331% ( 26) 00:26:26.082 17.222 - 17.338: 83.6574% ( 18) 00:26:26.082 17.338 - 17.455: 83.9192% ( 21) 00:26:26.082 17.455 - 17.571: 84.1685% ( 20) 00:26:26.082 17.571 - 17.687: 84.4428% ( 22) 00:26:26.082 17.687 - 17.804: 84.6547% ( 17) 00:26:26.082 17.804 - 17.920: 84.7669% ( 9) 00:26:26.082 17.920 - 18.036: 84.8666% ( 8) 00:26:26.082 18.036 - 18.153: 84.9663% ( 8) 00:26:26.082 18.153 - 18.269: 85.1533% ( 15) 00:26:26.082 18.269 - 18.385: 85.2032% ( 4) 00:26:26.082 18.385 - 18.502: 85.3029% ( 8) 00:26:26.082 18.502 - 18.618: 85.4026% ( 8) 00:26:26.082 18.618 - 18.735: 85.4774% ( 6) 00:26:26.082 18.735 - 18.851: 85.6146% ( 11) 00:26:26.082 18.851 - 18.967: 85.7517% ( 11) 00:26:26.082 18.967 - 19.084: 85.8639% ( 9) 00:26:26.082 19.084 - 19.200: 86.0010% ( 11) 00:26:26.082 19.200 - 19.316: 86.1007% ( 8) 00:26:26.082 19.316 - 19.433: 86.2628% ( 13) 00:26:26.082 19.433 - 19.549: 86.4373% ( 14) 00:26:26.082 19.549 - 19.665: 86.5744% ( 11) 00:26:26.082 19.665 - 19.782: 86.7614% ( 15) 00:26:26.082 19.782 - 19.898: 86.8861% ( 10) 00:26:26.082 19.898 - 20.015: 86.9609% ( 6) 00:26:26.082 20.015 - 20.131: 87.0357% ( 6) 00:26:26.082 20.131 - 20.247: 87.1478% ( 9) 00:26:26.082 20.247 - 20.364: 87.2725% ( 10) 00:26:26.082 20.364 - 20.480: 87.3473% ( 6) 00:26:26.082 20.480 - 20.596: 87.4720% ( 10) 00:26:26.082 20.596 - 20.713: 87.5467% ( 6) 00:26:26.082 20.713 - 20.829: 87.6839% ( 
11) 00:26:26.082 20.829 - 20.945: 87.7587% ( 6) 00:26:26.082 20.945 - 21.062: 87.8833% ( 10) 00:26:26.082 21.062 - 21.178: 88.0578% ( 14) 00:26:26.082 21.178 - 21.295: 88.1576% ( 8) 00:26:26.082 21.295 - 21.411: 88.2074% ( 4) 00:26:26.082 21.411 - 21.527: 88.3196% ( 9) 00:26:26.082 21.527 - 21.644: 88.4941% ( 14) 00:26:26.082 21.644 - 21.760: 88.6562% ( 13) 00:26:26.082 21.760 - 21.876: 88.9928% ( 27) 00:26:26.082 21.876 - 21.993: 89.2296% ( 19) 00:26:26.082 21.993 - 22.109: 89.4665% ( 19) 00:26:26.082 22.109 - 22.225: 89.6784% ( 17) 00:26:26.082 22.225 - 22.342: 89.8654% ( 15) 00:26:26.082 22.342 - 22.458: 90.0025% ( 11) 00:26:26.082 22.458 - 22.575: 90.1521% ( 12) 00:26:26.082 22.575 - 22.691: 90.4887% ( 27) 00:26:26.082 22.691 - 22.807: 90.6507% ( 13) 00:26:26.082 22.807 - 22.924: 90.7504% ( 8) 00:26:26.082 22.924 - 23.040: 90.9374% ( 15) 00:26:26.082 23.040 - 23.156: 91.0122% ( 6) 00:26:26.082 23.156 - 23.273: 91.1244% ( 9) 00:26:26.082 23.273 - 23.389: 91.1867% ( 5) 00:26:26.082 23.389 - 23.505: 91.2117% ( 2) 00:26:26.082 23.505 - 23.622: 91.3114% ( 8) 00:26:26.082 23.622 - 23.738: 91.3737% ( 5) 00:26:26.082 23.738 - 23.855: 91.4111% ( 3) 00:26:26.082 23.855 - 23.971: 91.4485% ( 3) 00:26:26.082 23.971 - 24.087: 91.5108% ( 5) 00:26:26.082 24.087 - 24.204: 91.6355% ( 10) 00:26:26.082 24.204 - 24.320: 91.6978% ( 5) 00:26:26.082 24.320 - 24.436: 91.7726% ( 6) 00:26:26.082 24.436 - 24.553: 91.8599% ( 7) 00:26:26.082 24.553 - 24.669: 91.8973% ( 3) 00:26:26.082 24.669 - 24.785: 91.9596% ( 5) 00:26:26.082 24.785 - 24.902: 91.9845% ( 2) 00:26:26.082 24.902 - 25.018: 92.0718% ( 7) 00:26:26.082 25.018 - 25.135: 92.1466% ( 6) 00:26:26.082 25.135 - 25.251: 92.1715% ( 2) 00:26:26.082 25.251 - 25.367: 92.2339% ( 5) 00:26:26.082 25.367 - 25.484: 92.3336% ( 8) 00:26:26.082 25.484 - 25.600: 92.4084% ( 6) 00:26:26.082 25.600 - 25.716: 92.5704% ( 13) 00:26:26.082 25.716 - 25.833: 92.8696% ( 24) 00:26:26.082 25.833 - 25.949: 93.1937% ( 26) 00:26:26.082 25.949 - 26.065: 93.4680% ( 22) 00:26:26.082 26.065 - 26.182: 93.6674% ( 16) 00:26:26.082 26.182 - 26.298: 93.8793% ( 17) 00:26:26.082 26.298 - 26.415: 94.2408% ( 29) 00:26:26.082 26.415 - 26.531: 94.5899% ( 28) 00:26:26.082 26.531 - 26.647: 94.8891% ( 24) 00:26:26.082 26.647 - 26.764: 95.1882% ( 24) 00:26:26.082 26.764 - 26.880: 95.3628% ( 14) 00:26:26.082 26.880 - 26.996: 95.4375% ( 6) 00:26:26.082 26.996 - 27.113: 95.5497% ( 9) 00:26:26.082 27.113 - 27.229: 95.8489% ( 24) 00:26:26.082 27.229 - 27.345: 96.0608% ( 17) 00:26:26.082 27.345 - 27.462: 96.6467% ( 47) 00:26:26.082 27.462 - 27.578: 97.1827% ( 43) 00:26:26.082 27.578 - 27.695: 97.6190% ( 35) 00:26:26.082 27.695 - 27.811: 97.9681% ( 28) 00:26:26.082 27.811 - 27.927: 98.1177% ( 12) 00:26:26.082 27.927 - 28.044: 98.2299% ( 9) 00:26:26.082 28.044 - 28.160: 98.3421% ( 9) 00:26:26.082 28.160 - 28.276: 98.4418% ( 8) 00:26:26.082 28.276 - 28.393: 98.5290% ( 7) 00:26:26.082 28.393 - 28.509: 98.5914% ( 5) 00:26:26.082 28.509 - 28.625: 98.6662% ( 6) 00:26:26.082 28.625 - 28.742: 98.7285% ( 5) 00:26:26.082 28.742 - 28.858: 98.7784% ( 4) 00:26:26.082 28.858 - 28.975: 98.8158% ( 3) 00:26:26.082 28.975 - 29.091: 98.8282% ( 1) 00:26:26.082 29.091 - 29.207: 98.8781% ( 4) 00:26:26.082 29.207 - 29.324: 98.8906% ( 1) 00:26:26.082 29.324 - 29.440: 98.9030% ( 1) 00:26:26.082 29.440 - 29.556: 98.9155% ( 1) 00:26:26.082 29.673 - 29.789: 98.9279% ( 1) 00:26:26.082 29.789 - 30.022: 99.0027% ( 6) 00:26:26.082 30.022 - 30.255: 99.0152% ( 1) 00:26:26.082 30.255 - 30.487: 99.0277% ( 1) 00:26:26.082 30.487 - 30.720: 99.0401% ( 
1) 00:26:26.082 30.953 - 31.185: 99.0900% ( 4) 00:26:26.082 31.185 - 31.418: 99.1149% ( 2) 00:26:26.082 31.418 - 31.651: 99.1399% ( 2) 00:26:26.082 31.651 - 31.884: 99.1773% ( 3) 00:26:26.082 31.884 - 32.116: 99.2147% ( 3) 00:26:26.082 32.116 - 32.349: 99.2521% ( 3) 00:26:26.082 32.349 - 32.582: 99.2645% ( 1) 00:26:26.082 32.582 - 32.815: 99.3144% ( 4) 00:26:26.082 32.815 - 33.047: 99.3518% ( 3) 00:26:26.083 33.513 - 33.745: 99.4016% ( 4) 00:26:26.083 33.978 - 34.211: 99.4141% ( 1) 00:26:26.083 34.211 - 34.444: 99.4515% ( 3) 00:26:26.083 34.444 - 34.676: 99.4764% ( 2) 00:26:26.083 34.676 - 34.909: 99.4889% ( 1) 00:26:26.083 34.909 - 35.142: 99.5014% ( 1) 00:26:26.083 35.142 - 35.375: 99.5263% ( 2) 00:26:26.083 35.375 - 35.607: 99.5388% ( 1) 00:26:26.083 36.073 - 36.305: 99.5762% ( 3) 00:26:26.083 36.305 - 36.538: 99.5886% ( 1) 00:26:26.083 36.538 - 36.771: 99.6011% ( 1) 00:26:26.083 37.004 - 37.236: 99.6136% ( 1) 00:26:26.083 37.236 - 37.469: 99.6385% ( 2) 00:26:26.083 37.935 - 38.167: 99.6510% ( 1) 00:26:26.083 38.400 - 38.633: 99.6634% ( 1) 00:26:26.083 38.633 - 38.865: 99.6759% ( 1) 00:26:26.083 38.865 - 39.098: 99.6884% ( 1) 00:26:26.083 39.098 - 39.331: 99.7008% ( 1) 00:26:26.083 39.796 - 40.029: 99.7133% ( 1) 00:26:26.083 40.029 - 40.262: 99.7258% ( 1) 00:26:26.083 40.262 - 40.495: 99.7507% ( 2) 00:26:26.083 40.727 - 40.960: 99.7756% ( 2) 00:26:26.083 40.960 - 41.193: 99.7881% ( 1) 00:26:26.083 41.193 - 41.425: 99.8005% ( 1) 00:26:26.083 41.425 - 41.658: 99.8130% ( 1) 00:26:26.083 41.658 - 41.891: 99.8255% ( 1) 00:26:26.083 41.891 - 42.124: 99.8379% ( 1) 00:26:26.083 42.822 - 43.055: 99.8504% ( 1) 00:26:26.083 43.753 - 43.985: 99.8629% ( 1) 00:26:26.083 45.847 - 46.080: 99.8753% ( 1) 00:26:26.083 47.244 - 47.476: 99.8878% ( 1) 00:26:26.083 47.942 - 48.175: 99.9003% ( 1) 00:26:26.083 48.640 - 48.873: 99.9127% ( 1) 00:26:26.083 55.156 - 55.389: 99.9252% ( 1) 00:26:26.083 56.553 - 56.785: 99.9377% ( 1) 00:26:26.083 57.251 - 57.484: 99.9501% ( 1) 00:26:26.083 59.113 - 59.345: 99.9626% ( 1) 00:26:26.083 59.578 - 60.044: 99.9751% ( 1) 00:26:26.083 73.076 - 73.542: 99.9875% ( 1) 00:26:26.083 104.262 - 104.727: 100.0000% ( 1) 00:26:26.083 00:26:26.083 Complete histogram 00:26:26.083 ================== 00:26:26.083 Range in us Cumulative Count 00:26:26.083 7.447 - 7.505: 0.0374% ( 3) 00:26:26.083 7.505 - 7.564: 0.9848% ( 76) 00:26:26.083 7.564 - 7.622: 4.4752% ( 280) 00:26:26.083 7.622 - 7.680: 8.4144% ( 316) 00:26:26.083 7.680 - 7.738: 10.7704% ( 189) 00:26:26.083 7.738 - 7.796: 12.0294% ( 101) 00:26:26.083 7.796 - 7.855: 13.6001% ( 126) 00:26:26.083 7.855 - 7.913: 15.0960% ( 120) 00:26:26.083 7.913 - 7.971: 17.5143% ( 194) 00:26:26.083 7.971 - 8.029: 20.8801% ( 270) 00:26:26.083 8.029 - 8.087: 23.3857% ( 201) 00:26:26.083 8.087 - 8.145: 26.1406% ( 221) 00:26:26.083 8.145 - 8.204: 30.5784% ( 356) 00:26:26.083 8.204 - 8.262: 34.9788% ( 353) 00:26:26.083 8.262 - 8.320: 37.6839% ( 217) 00:26:26.083 8.320 - 8.378: 41.0247% ( 268) 00:26:26.083 8.378 - 8.436: 43.9417% ( 234) 00:26:26.083 8.436 - 8.495: 46.1481% ( 177) 00:26:26.083 8.495 - 8.553: 47.2700% ( 90) 00:26:26.083 8.553 - 8.611: 48.3670% ( 88) 00:26:26.083 8.611 - 8.669: 49.5762% ( 97) 00:26:26.083 8.669 - 8.727: 50.2867% ( 57) 00:26:26.083 8.727 - 8.785: 50.6731% ( 31) 00:26:26.083 8.785 - 8.844: 51.0097% ( 27) 00:26:26.083 8.844 - 8.902: 51.2840% ( 22) 00:26:26.083 8.902 - 8.960: 51.5084% ( 18) 00:26:26.083 8.960 - 9.018: 51.7203% ( 17) 00:26:26.083 9.018 - 9.076: 51.8449% ( 10) 00:26:26.083 9.076 - 9.135: 51.9571% ( 9) 00:26:26.083 
9.135 - 9.193: 52.1316% ( 14) 00:26:26.083 9.193 - 9.251: 52.2812% ( 12) 00:26:26.083 9.251 - 9.309: 52.4183% ( 11) 00:26:26.083 9.309 - 9.367: 52.5555% ( 11) 00:26:26.083 9.367 - 9.425: 52.6303% ( 6) 00:26:26.083 9.425 - 9.484: 52.6801% ( 4) 00:26:26.083 9.484 - 9.542: 52.7799% ( 8) 00:26:26.083 9.542 - 9.600: 52.8048% ( 2) 00:26:26.083 9.600 - 9.658: 52.8671% ( 5) 00:26:26.083 9.658 - 9.716: 52.9294% ( 5) 00:26:26.083 9.716 - 9.775: 52.9668% ( 3) 00:26:26.083 9.775 - 9.833: 53.0416% ( 6) 00:26:26.083 9.833 - 9.891: 53.0915% ( 4) 00:26:26.083 9.891 - 9.949: 53.1663% ( 6) 00:26:26.083 9.949 - 10.007: 53.2411% ( 6) 00:26:26.083 10.007 - 10.065: 53.2536% ( 1) 00:26:26.083 10.065 - 10.124: 53.2660% ( 1) 00:26:26.083 10.124 - 10.182: 53.2909% ( 2) 00:26:26.083 10.182 - 10.240: 53.3533% ( 5) 00:26:26.083 10.240 - 10.298: 53.3907% ( 3) 00:26:26.083 10.298 - 10.356: 53.4031% ( 1) 00:26:26.083 10.356 - 10.415: 53.5278% ( 10) 00:26:26.083 10.415 - 10.473: 53.5403% ( 1) 00:26:26.083 10.473 - 10.531: 53.5901% ( 4) 00:26:26.083 10.531 - 10.589: 53.6151% ( 2) 00:26:26.083 10.589 - 10.647: 53.6649% ( 4) 00:26:26.083 10.647 - 10.705: 53.7023% ( 3) 00:26:26.083 10.705 - 10.764: 53.7273% ( 2) 00:26:26.083 10.764 - 10.822: 53.8145% ( 7) 00:26:26.083 10.822 - 10.880: 53.9142% ( 8) 00:26:26.083 10.880 - 10.938: 53.9766% ( 5) 00:26:26.083 10.938 - 10.996: 54.0264% ( 4) 00:26:26.083 10.996 - 11.055: 54.0638% ( 3) 00:26:26.083 11.055 - 11.113: 54.0763% ( 1) 00:26:26.083 11.113 - 11.171: 54.1012% ( 2) 00:26:26.083 11.171 - 11.229: 54.1386% ( 3) 00:26:26.083 11.229 - 11.287: 54.1885% ( 4) 00:26:26.083 11.287 - 11.345: 54.2383% ( 4) 00:26:26.083 11.345 - 11.404: 54.2757% ( 3) 00:26:26.083 11.404 - 11.462: 54.4129% ( 11) 00:26:26.083 11.462 - 11.520: 56.9185% ( 201) 00:26:26.083 11.520 - 11.578: 64.9713% ( 646) 00:26:26.083 11.578 - 11.636: 74.3954% ( 756) 00:26:26.083 11.636 - 11.695: 80.2418% ( 469) 00:26:26.083 11.695 - 11.753: 82.9220% ( 215) 00:26:26.083 11.753 - 11.811: 83.9566% ( 83) 00:26:26.083 11.811 - 11.869: 84.4926% ( 43) 00:26:26.083 11.869 - 11.927: 84.7420% ( 20) 00:26:26.083 11.927 - 11.985: 84.8292% ( 7) 00:26:26.083 11.985 - 12.044: 84.9289% ( 8) 00:26:26.083 12.044 - 12.102: 85.0037% ( 6) 00:26:26.083 12.102 - 12.160: 85.0661% ( 5) 00:26:26.083 12.160 - 12.218: 85.1035% ( 3) 00:26:26.083 12.218 - 12.276: 85.1409% ( 3) 00:26:26.083 12.276 - 12.335: 85.2281% ( 7) 00:26:26.083 12.335 - 12.393: 85.3029% ( 6) 00:26:26.083 12.393 - 12.451: 85.3403% ( 3) 00:26:26.083 12.451 - 12.509: 85.5273% ( 15) 00:26:26.083 12.509 - 12.567: 85.6644% ( 11) 00:26:26.083 12.567 - 12.625: 85.8265% ( 13) 00:26:26.083 12.625 - 12.684: 86.0384% ( 17) 00:26:26.083 12.684 - 12.742: 86.2378% ( 16) 00:26:26.083 12.742 - 12.800: 86.3376% ( 8) 00:26:26.083 12.800 - 12.858: 86.4373% ( 8) 00:26:26.083 12.858 - 12.916: 86.5495% ( 9) 00:26:26.083 12.916 - 12.975: 86.6243% ( 6) 00:26:26.083 12.975 - 13.033: 86.6617% ( 3) 00:26:26.083 13.033 - 13.091: 86.6991% ( 3) 00:26:26.083 13.091 - 13.149: 86.7739% ( 6) 00:26:26.083 13.149 - 13.207: 86.8736% ( 8) 00:26:26.083 13.207 - 13.265: 86.9484% ( 6) 00:26:26.083 13.265 - 13.324: 87.0730% ( 10) 00:26:26.083 13.324 - 13.382: 87.1478% ( 6) 00:26:26.083 13.382 - 13.440: 87.2102% ( 5) 00:26:26.083 13.440 - 13.498: 87.3099% ( 8) 00:26:26.083 13.498 - 13.556: 87.4096% ( 8) 00:26:26.083 13.556 - 13.615: 87.5343% ( 10) 00:26:26.083 13.615 - 13.673: 87.5966% ( 5) 00:26:26.083 13.673 - 13.731: 87.6465% ( 4) 00:26:26.083 13.731 - 13.789: 87.7088% ( 5) 00:26:26.083 13.789 - 13.847: 87.7836% ( 6) 
00:26:26.083 13.847 - 13.905: 87.8459% ( 5) 00:26:26.083 13.905 - 13.964: 87.9706% ( 10) 00:26:26.083 13.964 - 14.022: 88.0204% ( 4) 00:26:26.083 14.022 - 14.080: 88.0454% ( 2) 00:26:26.083 14.080 - 14.138: 88.1077% ( 5) 00:26:26.083 14.138 - 14.196: 88.1700% ( 5) 00:26:26.083 14.196 - 14.255: 88.1950% ( 2) 00:26:26.083 14.255 - 14.313: 88.2698% ( 6) 00:26:26.083 14.313 - 14.371: 88.3695% ( 8) 00:26:26.083 14.371 - 14.429: 88.4318% ( 5) 00:26:26.083 14.429 - 14.487: 88.4567% ( 2) 00:26:26.083 14.487 - 14.545: 88.4941% ( 3) 00:26:26.084 14.545 - 14.604: 88.5191% ( 2) 00:26:26.084 14.604 - 14.662: 88.5565% ( 3) 00:26:26.084 14.662 - 14.720: 88.6188% ( 5) 00:26:26.084 14.720 - 14.778: 88.6437% ( 2) 00:26:26.084 14.778 - 14.836: 88.7061% ( 5) 00:26:26.084 14.836 - 14.895: 88.7559% ( 4) 00:26:26.084 14.895 - 15.011: 88.8307% ( 6) 00:26:26.084 15.011 - 15.127: 88.9055% ( 6) 00:26:26.084 15.127 - 15.244: 88.9554% ( 4) 00:26:26.084 15.244 - 15.360: 89.0177% ( 5) 00:26:26.084 15.360 - 15.476: 89.0800% ( 5) 00:26:26.084 15.476 - 15.593: 89.1050% ( 2) 00:26:26.084 15.593 - 15.709: 89.1673% ( 5) 00:26:26.084 15.709 - 15.825: 89.2172% ( 4) 00:26:26.084 15.825 - 15.942: 89.2919% ( 6) 00:26:26.084 15.942 - 16.058: 89.3543% ( 5) 00:26:26.084 16.058 - 16.175: 89.3917% ( 3) 00:26:26.084 16.175 - 16.291: 89.4789% ( 7) 00:26:26.084 16.291 - 16.407: 89.5288% ( 4) 00:26:26.084 16.407 - 16.524: 89.5911% ( 5) 00:26:26.084 16.524 - 16.640: 89.6535% ( 5) 00:26:26.084 16.640 - 16.756: 89.7158% ( 5) 00:26:26.084 16.756 - 16.873: 89.8529% ( 11) 00:26:26.084 16.873 - 16.989: 89.9776% ( 10) 00:26:26.084 16.989 - 17.105: 90.1022% ( 10) 00:26:26.084 17.105 - 17.222: 90.3141% ( 17) 00:26:26.084 17.222 - 17.338: 90.4637% ( 12) 00:26:26.084 17.338 - 17.455: 90.6008% ( 11) 00:26:26.084 17.455 - 17.571: 90.7380% ( 11) 00:26:26.084 17.571 - 17.687: 90.8876% ( 12) 00:26:26.084 17.687 - 17.804: 91.0247% ( 11) 00:26:26.084 17.804 - 17.920: 91.1493% ( 10) 00:26:26.084 17.920 - 18.036: 91.2366% ( 7) 00:26:26.084 18.036 - 18.153: 91.3737% ( 11) 00:26:26.084 18.153 - 18.269: 91.4734% ( 8) 00:26:26.084 18.269 - 18.385: 91.5358% ( 5) 00:26:26.084 18.385 - 18.502: 91.6106% ( 6) 00:26:26.084 18.502 - 18.618: 91.6480% ( 3) 00:26:26.084 18.618 - 18.735: 91.7103% ( 5) 00:26:26.084 18.735 - 18.851: 91.7602% ( 4) 00:26:26.084 18.851 - 18.967: 91.8100% ( 4) 00:26:26.084 18.967 - 19.084: 91.8599% ( 4) 00:26:26.084 19.084 - 19.200: 91.8724% ( 1) 00:26:26.084 19.200 - 19.316: 91.8973% ( 2) 00:26:26.084 19.316 - 19.433: 91.9222% ( 2) 00:26:26.084 19.549 - 19.665: 91.9845% ( 5) 00:26:26.084 19.665 - 19.782: 92.0344% ( 4) 00:26:26.084 19.782 - 19.898: 92.1092% ( 6) 00:26:26.084 19.898 - 20.015: 92.1341% ( 2) 00:26:26.084 20.015 - 20.131: 92.1840% ( 4) 00:26:26.084 20.131 - 20.247: 92.2214% ( 3) 00:26:26.084 20.247 - 20.364: 92.2463% ( 2) 00:26:26.084 20.364 - 20.480: 92.2837% ( 3) 00:26:26.084 20.480 - 20.596: 92.2962% ( 1) 00:26:26.084 20.596 - 20.713: 92.3211% ( 2) 00:26:26.084 20.829 - 20.945: 92.3710% ( 4) 00:26:26.084 20.945 - 21.062: 92.3959% ( 2) 00:26:26.084 21.062 - 21.178: 92.4333% ( 3) 00:26:26.084 21.295 - 21.411: 92.4458% ( 1) 00:26:26.084 21.411 - 21.527: 92.4832% ( 3) 00:26:26.084 21.527 - 21.644: 92.5330% ( 4) 00:26:26.084 21.644 - 21.760: 92.5829% ( 4) 00:26:26.084 21.760 - 21.876: 92.6078% ( 2) 00:26:26.084 21.876 - 21.993: 92.6203% ( 1) 00:26:26.084 21.993 - 22.109: 92.7325% ( 9) 00:26:26.084 22.109 - 22.225: 92.9569% ( 18) 00:26:26.084 22.225 - 22.342: 93.1688% ( 17) 00:26:26.084 22.342 - 22.458: 93.4804% ( 25) 00:26:26.084 22.458 
- 22.575: 93.6923% ( 17) 00:26:26.084 22.575 - 22.691: 94.0414% ( 28) 00:26:26.084 22.691 - 22.807: 94.3281% ( 23) 00:26:26.084 22.807 - 22.924: 94.5774% ( 20) 00:26:26.084 22.924 - 23.040: 94.9763% ( 32) 00:26:26.084 23.040 - 23.156: 95.2506% ( 22) 00:26:26.084 23.156 - 23.273: 95.4126% ( 13) 00:26:26.084 23.273 - 23.389: 95.5871% ( 14) 00:26:26.084 23.389 - 23.505: 95.7492% ( 13) 00:26:26.084 23.505 - 23.622: 95.9237% ( 14) 00:26:26.084 23.622 - 23.738: 96.1481% ( 18) 00:26:26.084 23.738 - 23.855: 96.7464% ( 48) 00:26:26.084 23.855 - 23.971: 97.3573% ( 49) 00:26:26.084 23.971 - 24.087: 97.8185% ( 37) 00:26:26.084 24.087 - 24.204: 98.1301% ( 25) 00:26:26.084 24.204 - 24.320: 98.3421% ( 17) 00:26:26.084 24.320 - 24.436: 98.4169% ( 6) 00:26:26.084 24.436 - 24.553: 98.5166% ( 8) 00:26:26.084 24.553 - 24.669: 98.5540% ( 3) 00:26:26.084 24.669 - 24.785: 98.6163% ( 5) 00:26:26.084 24.785 - 24.902: 98.6412% ( 2) 00:26:26.084 24.902 - 25.018: 98.7036% ( 5) 00:26:26.084 25.018 - 25.135: 98.7285% ( 2) 00:26:26.084 25.135 - 25.251: 98.7410% ( 1) 00:26:26.084 25.484 - 25.600: 98.7659% ( 2) 00:26:26.084 25.716 - 25.833: 98.7784% ( 1) 00:26:26.084 25.833 - 25.949: 98.8033% ( 2) 00:26:26.084 26.065 - 26.182: 98.8282% ( 2) 00:26:26.084 26.298 - 26.415: 98.8407% ( 1) 00:26:26.084 26.415 - 26.531: 98.8781% ( 3) 00:26:26.084 26.531 - 26.647: 98.8906% ( 1) 00:26:26.084 26.647 - 26.764: 98.9155% ( 2) 00:26:26.084 26.764 - 26.880: 98.9404% ( 2) 00:26:26.084 26.880 - 26.996: 98.9529% ( 1) 00:26:26.084 26.996 - 27.113: 98.9778% ( 2) 00:26:26.084 27.113 - 27.229: 98.9903% ( 1) 00:26:26.084 27.229 - 27.345: 99.0152% ( 2) 00:26:26.084 27.345 - 27.462: 99.0401% ( 2) 00:26:26.084 27.462 - 27.578: 99.1025% ( 5) 00:26:26.084 27.578 - 27.695: 99.1399% ( 3) 00:26:26.084 27.695 - 27.811: 99.1648% ( 2) 00:26:26.084 27.811 - 27.927: 99.1897% ( 2) 00:26:26.084 28.160 - 28.276: 99.2147% ( 2) 00:26:26.084 28.276 - 28.393: 99.2521% ( 3) 00:26:26.084 28.509 - 28.625: 99.2645% ( 1) 00:26:26.084 28.742 - 28.858: 99.2895% ( 2) 00:26:26.084 28.858 - 28.975: 99.3393% ( 4) 00:26:26.084 28.975 - 29.091: 99.3518% ( 1) 00:26:26.084 29.091 - 29.207: 99.3642% ( 1) 00:26:26.084 29.324 - 29.440: 99.4016% ( 3) 00:26:26.084 29.440 - 29.556: 99.4141% ( 1) 00:26:26.084 29.556 - 29.673: 99.4390% ( 2) 00:26:26.084 29.673 - 29.789: 99.4640% ( 2) 00:26:26.084 30.022 - 30.255: 99.4764% ( 1) 00:26:26.084 30.255 - 30.487: 99.5014% ( 2) 00:26:26.084 30.487 - 30.720: 99.5263% ( 2) 00:26:26.084 30.720 - 30.953: 99.5388% ( 1) 00:26:26.084 31.418 - 31.651: 99.5512% ( 1) 00:26:26.084 31.884 - 32.116: 99.5762% ( 2) 00:26:26.084 32.349 - 32.582: 99.5886% ( 1) 00:26:26.084 32.815 - 33.047: 99.6136% ( 2) 00:26:26.084 33.280 - 33.513: 99.6634% ( 4) 00:26:26.084 33.513 - 33.745: 99.6759% ( 1) 00:26:26.084 33.978 - 34.211: 99.6884% ( 1) 00:26:26.084 35.142 - 35.375: 99.7133% ( 2) 00:26:26.084 35.840 - 36.073: 99.7258% ( 1) 00:26:26.084 36.073 - 36.305: 99.7507% ( 2) 00:26:26.084 37.004 - 37.236: 99.7632% ( 1) 00:26:26.084 37.236 - 37.469: 99.7756% ( 1) 00:26:26.084 38.167 - 38.400: 99.7881% ( 1) 00:26:26.084 38.865 - 39.098: 99.8005% ( 1) 00:26:26.084 39.331 - 39.564: 99.8130% ( 1) 00:26:26.084 41.891 - 42.124: 99.8255% ( 1) 00:26:26.084 42.589 - 42.822: 99.8379% ( 1) 00:26:26.084 43.287 - 43.520: 99.8504% ( 1) 00:26:26.084 44.451 - 44.684: 99.8629% ( 1) 00:26:26.084 44.916 - 45.149: 99.8753% ( 1) 00:26:26.084 46.080 - 46.313: 99.8878% ( 1) 00:26:26.084 46.313 - 46.545: 99.9003% ( 1) 00:26:26.084 47.244 - 47.476: 99.9127% ( 1) 00:26:26.084 51.200 - 51.433: 99.9252% 
( 1) 00:26:26.084 54.691 - 54.924: 99.9377% ( 1) 00:26:26.084 56.320 - 56.553: 99.9501% ( 1) 00:26:26.084 64.233 - 64.698: 99.9626% ( 1) 00:26:26.084 69.353 - 69.818: 99.9751% ( 1) 00:26:26.084 141.498 - 142.429: 99.9875% ( 1) 00:26:26.084 146.153 - 147.084: 100.0000% ( 1) 00:26:26.084 00:26:26.084 00:26:26.084 real 0m1.248s 00:26:26.084 user 0m1.095s 00:26:26.084 sys 0m0.089s 00:26:26.084 14:28:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:26.084 ************************************ 00:26:26.084 END TEST nvme_overhead 00:26:26.084 ************************************ 00:26:26.084 14:28:17 -- common/autotest_common.sh@10 -- # set +x 00:26:26.084 14:28:17 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:26:26.084 14:28:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:26.084 14:28:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:26.084 14:28:17 -- common/autotest_common.sh@10 -- # set +x 00:26:26.084 ************************************ 00:26:26.084 START TEST nvme_arbitration 00:26:26.084 ************************************ 00:26:26.084 14:28:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:26:29.361 Initializing NVMe Controllers 00:26:29.361 Attached to 0000:00:06.0 00:26:29.361 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:26:29.361 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:26:29.361 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:26:29.361 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:26:29.361 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:26:29.361 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:26:29.361 Initialization complete. Launching workers. 
00:26:29.361 Starting thread on core 1 with urgent priority queue 00:26:29.361 Starting thread on core 2 with urgent priority queue 00:26:29.361 Starting thread on core 3 with urgent priority queue 00:26:29.361 Starting thread on core 0 with urgent priority queue 00:26:29.361 QEMU NVMe Ctrl (12340 ) core 0: 7174.67 IO/s 13.94 secs/100000 ios 00:26:29.361 QEMU NVMe Ctrl (12340 ) core 1: 7333.00 IO/s 13.64 secs/100000 ios 00:26:29.361 QEMU NVMe Ctrl (12340 ) core 2: 4110.67 IO/s 24.33 secs/100000 ios 00:26:29.361 QEMU NVMe Ctrl (12340 ) core 3: 4109.67 IO/s 24.33 secs/100000 ios 00:26:29.361 ======================================================== 00:26:29.361 00:26:29.361 00:26:29.361 real 0m3.291s 00:26:29.361 user 0m9.186s 00:26:29.361 sys 0m0.073s 00:26:29.361 14:28:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:29.361 ************************************ 00:26:29.361 END TEST nvme_arbitration 00:26:29.361 ************************************ 00:26:29.361 14:28:21 -- common/autotest_common.sh@10 -- # set +x 00:26:29.362 14:28:21 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:26:29.362 14:28:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:29.362 14:28:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.362 14:28:21 -- common/autotest_common.sh@10 -- # set +x 00:26:29.362 ************************************ 00:26:29.362 START TEST nvme_single_aen 00:26:29.362 ************************************ 00:26:29.362 14:28:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:26:29.362 [2024-11-18 14:28:21.275806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:29.362 [2024-11-18 14:28:21.275919] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.362 [2024-11-18 14:28:21.408901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:29.621 Asynchronous Event Request test 00:26:29.621 Attached to 0000:00:06.0 00:26:29.621 Reset controller to setup AER completions for this process 00:26:29.621 Registering asynchronous event callbacks... 00:26:29.621 Getting orig temperature thresholds of all controllers 00:26:29.621 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:29.621 Setting all controllers temperature threshold low to trigger AER 00:26:29.621 Waiting for all controllers temperature threshold to be set lower 00:26:29.621 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:29.621 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:26:29.621 Waiting for all controllers to trigger AER and reset threshold 00:26:29.621 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:29.621 Cleaning up... 
00:26:29.621 00:26:29.621 real 0m0.204s 00:26:29.621 user 0m0.069s 00:26:29.621 sys 0m0.066s 00:26:29.621 ************************************ 00:26:29.621 END TEST nvme_single_aen 00:26:29.621 ************************************ 00:26:29.621 14:28:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:29.621 14:28:21 -- common/autotest_common.sh@10 -- # set +x 00:26:29.621 14:28:21 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:26:29.621 14:28:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:29.621 14:28:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:29.621 14:28:21 -- common/autotest_common.sh@10 -- # set +x 00:26:29.621 ************************************ 00:26:29.621 START TEST nvme_doorbell_aers 00:26:29.621 ************************************ 00:26:29.621 14:28:21 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:26:29.621 14:28:21 -- nvme/nvme.sh@70 -- # bdfs=() 00:26:29.621 14:28:21 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:26:29.621 14:28:21 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:26:29.621 14:28:21 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:26:29.621 14:28:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:29.621 14:28:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:29.621 14:28:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:29.621 14:28:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:29.621 14:28:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:29.621 14:28:21 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:29.621 14:28:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:26:29.621 14:28:21 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:26:29.621 14:28:21 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:29.879 [2024-11-18 14:28:21.816299] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148062) is not found. Dropping the request. 00:26:39.853 Executing: test_write_invalid_db 00:26:39.853 Waiting for AER completion... 00:26:39.853 Failure: test_write_invalid_db 00:26:39.853 00:26:39.853 Executing: test_invalid_db_write_overflow_sq 00:26:39.853 Waiting for AER completion... 00:26:39.853 Failure: test_invalid_db_write_overflow_sq 00:26:39.853 00:26:39.853 Executing: test_invalid_db_write_overflow_cq 00:26:39.853 Waiting for AER completion... 
00:26:39.853 Failure: test_invalid_db_write_overflow_cq 00:26:39.853 00:26:39.853 00:26:39.853 real 0m10.102s 00:26:39.853 user 0m8.563s 00:26:39.853 sys 0m1.466s 00:26:39.853 14:28:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:39.853 ************************************ 00:26:39.853 END TEST nvme_doorbell_aers 00:26:39.853 ************************************ 00:26:39.853 14:28:31 -- common/autotest_common.sh@10 -- # set +x 00:26:39.853 14:28:31 -- nvme/nvme.sh@97 -- # uname 00:26:39.853 14:28:31 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:26:39.853 14:28:31 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:26:39.853 14:28:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:26:39.853 14:28:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:39.853 14:28:31 -- common/autotest_common.sh@10 -- # set +x 00:26:39.853 ************************************ 00:26:39.853 START TEST nvme_multi_aen 00:26:39.853 ************************************ 00:26:39.853 14:28:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:26:39.853 [2024-11-18 14:28:31.671590] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:39.853 [2024-11-18 14:28:31.671737] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.853 [2024-11-18 14:28:31.889554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:39.853 [2024-11-18 14:28:31.889624] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148062) is not found. Dropping the request. 00:26:39.853 [2024-11-18 14:28:31.890120] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148062) is not found. Dropping the request. 00:26:39.853 [2024-11-18 14:28:31.890291] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148062) is not found. Dropping the request. 00:26:39.853 [2024-11-18 14:28:31.896359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:26:39.853 Child process pid: 148257 00:26:39.853 [2024-11-18 14:28:31.896617] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.112 [Child] Asynchronous Event Request test 00:26:40.112 [Child] Attached to 0000:00:06.0 00:26:40.112 [Child] Registering asynchronous event callbacks... 00:26:40.112 [Child] Getting orig temperature thresholds of all controllers 00:26:40.112 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:40.112 [Child] Waiting for all controllers to trigger AER and reset threshold 00:26:40.112 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:40.112 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:40.112 [Child] Cleaning up... 00:26:40.371 Asynchronous Event Request test 00:26:40.371 Attached to 0000:00:06.0 00:26:40.371 Reset controller to setup AER completions for this process 00:26:40.371 Registering asynchronous event callbacks... 
00:26:40.371 Getting orig temperature thresholds of all controllers 00:26:40.371 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:40.371 Setting all controllers temperature threshold low to trigger AER 00:26:40.371 Waiting for all controllers temperature threshold to be set lower 00:26:40.371 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:40.371 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:26:40.371 Waiting for all controllers to trigger AER and reset threshold 00:26:40.371 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:40.371 Cleaning up... 00:26:40.371 00:26:40.371 real 0m0.570s 00:26:40.371 user 0m0.170s 00:26:40.371 sys 0m0.185s 00:26:40.371 14:28:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.371 ************************************ 00:26:40.371 END TEST nvme_multi_aen 00:26:40.371 ************************************ 00:26:40.371 14:28:32 -- common/autotest_common.sh@10 -- # set +x 00:26:40.371 14:28:32 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:26:40.371 14:28:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:40.371 14:28:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.371 14:28:32 -- common/autotest_common.sh@10 -- # set +x 00:26:40.371 ************************************ 00:26:40.371 START TEST nvme_startup 00:26:40.371 ************************************ 00:26:40.371 14:28:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:26:40.630 Initializing NVMe Controllers 00:26:40.630 Attached to 0000:00:06.0 00:26:40.630 Initialization complete. 00:26:40.630 Time used:142519.250 (us). 00:26:40.630 00:26:40.630 real 0m0.211s 00:26:40.630 user 0m0.054s 00:26:40.630 sys 0m0.095s 00:26:40.630 ************************************ 00:26:40.630 END TEST nvme_startup 00:26:40.630 ************************************ 00:26:40.630 14:28:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:40.630 14:28:32 -- common/autotest_common.sh@10 -- # set +x 00:26:40.630 14:28:32 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:26:40.630 14:28:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:40.630 14:28:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.630 14:28:32 -- common/autotest_common.sh@10 -- # set +x 00:26:40.630 ************************************ 00:26:40.630 START TEST nvme_multi_secondary 00:26:40.630 ************************************ 00:26:40.630 14:28:32 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:26:40.630 14:28:32 -- nvme/nvme.sh@52 -- # pid0=148324 00:26:40.630 14:28:32 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:26:40.630 14:28:32 -- nvme/nvme.sh@54 -- # pid1=148325 00:26:40.630 14:28:32 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:26:40.630 14:28:32 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:26:43.919 Initializing NVMe Controllers 00:26:43.919 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:43.919 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:26:43.919 Initialization complete. Launching workers. 
00:26:43.919 ======================================================== 00:26:43.919 Latency(us) 00:26:43.919 Device Information : IOPS MiB/s Average min max 00:26:43.919 PCIE (0000:00:06.0) NSID 1 from core 1: 36527.37 142.69 437.71 97.77 16624.90 00:26:43.919 ======================================================== 00:26:43.919 Total : 36527.37 142.69 437.71 97.77 16624.90 00:26:43.919 00:26:44.178 Initializing NVMe Controllers 00:26:44.178 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:44.178 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:26:44.178 Initialization complete. Launching workers. 00:26:44.178 ======================================================== 00:26:44.178 Latency(us) 00:26:44.178 Device Information : IOPS MiB/s Average min max 00:26:44.178 PCIE (0000:00:06.0) NSID 1 from core 2: 14018.00 54.76 1140.67 132.27 20713.07 00:26:44.178 ======================================================== 00:26:44.178 Total : 14018.00 54.76 1140.67 132.27 20713.07 00:26:44.178 00:26:44.178 14:28:36 -- nvme/nvme.sh@56 -- # wait 148324 00:26:46.083 Initializing NVMe Controllers 00:26:46.083 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:46.083 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:46.083 Initialization complete. Launching workers. 00:26:46.083 ======================================================== 00:26:46.083 Latency(us) 00:26:46.083 Device Information : IOPS MiB/s Average min max 00:26:46.083 PCIE (0000:00:06.0) NSID 1 from core 0: 43114.62 168.42 370.78 86.68 1182.36 00:26:46.083 ======================================================== 00:26:46.083 Total : 43114.62 168.42 370.78 86.68 1182.36 00:26:46.083 00:26:46.083 14:28:37 -- nvme/nvme.sh@57 -- # wait 148325 00:26:46.083 14:28:37 -- nvme/nvme.sh@61 -- # pid0=148405 00:26:46.083 14:28:37 -- nvme/nvme.sh@63 -- # pid1=148406 00:26:46.083 14:28:37 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:26:46.083 14:28:37 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:26:46.083 14:28:37 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:26:49.374 Initializing NVMe Controllers 00:26:49.374 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:49.374 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:26:49.374 Initialization complete. Launching workers. 00:26:49.374 ======================================================== 00:26:49.374 Latency(us) 00:26:49.374 Device Information : IOPS MiB/s Average min max 00:26:49.374 PCIE (0000:00:06.0) NSID 1 from core 0: 36528.81 142.69 437.73 115.52 1657.03 00:26:49.374 ======================================================== 00:26:49.374 Total : 36528.81 142.69 437.73 115.52 1657.03 00:26:49.374 00:26:49.374 Initializing NVMe Controllers 00:26:49.374 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:49.374 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:26:49.374 Initialization complete. Launching workers. 
00:26:49.374 ======================================================== 00:26:49.374 Latency(us) 00:26:49.374 Device Information : IOPS MiB/s Average min max 00:26:49.374 PCIE (0000:00:06.0) NSID 1 from core 1: 35486.67 138.62 450.54 117.75 1855.25 00:26:49.374 ======================================================== 00:26:49.374 Total : 35486.67 138.62 450.54 117.75 1855.25 00:26:49.374 00:26:51.907 Initializing NVMe Controllers 00:26:51.907 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:26:51.907 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:26:51.907 Initialization complete. Launching workers. 00:26:51.907 ======================================================== 00:26:51.907 Latency(us) 00:26:51.907 Device Information : IOPS MiB/s Average min max 00:26:51.907 PCIE (0000:00:06.0) NSID 1 from core 2: 17251.80 67.39 926.73 120.44 24753.30 00:26:51.907 ======================================================== 00:26:51.907 Total : 17251.80 67.39 926.73 120.44 24753.30 00:26:51.907 00:26:51.907 14:28:43 -- nvme/nvme.sh@65 -- # wait 148405 00:26:51.907 14:28:43 -- nvme/nvme.sh@66 -- # wait 148406 00:26:51.907 00:26:51.907 real 0m10.976s 00:26:51.907 user 0m18.509s 00:26:51.907 sys 0m0.673s 00:26:51.907 14:28:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:51.907 ************************************ 00:26:51.907 END TEST nvme_multi_secondary 00:26:51.907 ************************************ 00:26:51.907 14:28:43 -- common/autotest_common.sh@10 -- # set +x 00:26:51.907 14:28:43 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:26:51.907 14:28:43 -- nvme/nvme.sh@102 -- # kill_stub 00:26:51.907 14:28:43 -- common/autotest_common.sh@1075 -- # [[ -e /proc/147628 ]] 00:26:51.907 14:28:43 -- common/autotest_common.sh@1076 -- # kill 147628 00:26:51.907 14:28:43 -- common/autotest_common.sh@1077 -- # wait 147628 00:26:52.475 [2024-11-18 14:28:44.310845] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148256) is not found. Dropping the request. 00:26:52.475 [2024-11-18 14:28:44.311029] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148256) is not found. Dropping the request. 00:26:52.475 [2024-11-18 14:28:44.311110] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148256) is not found. Dropping the request. 00:26:52.475 [2024-11-18 14:28:44.311621] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 148256) is not found. Dropping the request. 00:26:52.475 14:28:44 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:26:52.475 14:28:44 -- common/autotest_common.sh@1083 -- # echo 2 00:26:52.475 14:28:44 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:26:52.475 14:28:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:52.475 14:28:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:52.475 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:26:52.475 ************************************ 00:26:52.475 START TEST bdev_nvme_reset_stuck_adm_cmd 00:26:52.475 ************************************ 00:26:52.475 14:28:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:26:52.475 * Looking for test storage... 
00:26:52.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:52.475 14:28:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:52.475 14:28:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:52.475 14:28:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:52.735 14:28:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:52.735 14:28:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:52.735 14:28:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:52.735 14:28:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:52.735 14:28:44 -- scripts/common.sh@335 -- # IFS=.-: 00:26:52.735 14:28:44 -- scripts/common.sh@335 -- # read -ra ver1 00:26:52.735 14:28:44 -- scripts/common.sh@336 -- # IFS=.-: 00:26:52.735 14:28:44 -- scripts/common.sh@336 -- # read -ra ver2 00:26:52.735 14:28:44 -- scripts/common.sh@337 -- # local 'op=<' 00:26:52.735 14:28:44 -- scripts/common.sh@339 -- # ver1_l=2 00:26:52.735 14:28:44 -- scripts/common.sh@340 -- # ver2_l=1 00:26:52.735 14:28:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:52.735 14:28:44 -- scripts/common.sh@343 -- # case "$op" in 00:26:52.735 14:28:44 -- scripts/common.sh@344 -- # : 1 00:26:52.735 14:28:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:52.735 14:28:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:52.735 14:28:44 -- scripts/common.sh@364 -- # decimal 1 00:26:52.735 14:28:44 -- scripts/common.sh@352 -- # local d=1 00:26:52.735 14:28:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:52.735 14:28:44 -- scripts/common.sh@354 -- # echo 1 00:26:52.735 14:28:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:52.735 14:28:44 -- scripts/common.sh@365 -- # decimal 2 00:26:52.735 14:28:44 -- scripts/common.sh@352 -- # local d=2 00:26:52.735 14:28:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:52.735 14:28:44 -- scripts/common.sh@354 -- # echo 2 00:26:52.735 14:28:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:52.735 14:28:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:52.735 14:28:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:52.735 14:28:44 -- scripts/common.sh@367 -- # return 0 00:26:52.735 14:28:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:52.735 14:28:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.735 --rc genhtml_branch_coverage=1 00:26:52.735 --rc genhtml_function_coverage=1 00:26:52.735 --rc genhtml_legend=1 00:26:52.735 --rc geninfo_all_blocks=1 00:26:52.735 --rc geninfo_unexecuted_blocks=1 00:26:52.735 00:26:52.735 ' 00:26:52.735 14:28:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.735 --rc genhtml_branch_coverage=1 00:26:52.735 --rc genhtml_function_coverage=1 00:26:52.735 --rc genhtml_legend=1 00:26:52.735 --rc geninfo_all_blocks=1 00:26:52.735 --rc geninfo_unexecuted_blocks=1 00:26:52.735 00:26:52.735 ' 00:26:52.735 14:28:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.735 --rc genhtml_branch_coverage=1 00:26:52.735 --rc genhtml_function_coverage=1 00:26:52.735 --rc genhtml_legend=1 00:26:52.735 --rc geninfo_all_blocks=1 00:26:52.735 --rc geninfo_unexecuted_blocks=1 00:26:52.735 00:26:52.735 ' 00:26:52.735 14:28:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:52.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.735 --rc genhtml_branch_coverage=1 00:26:52.735 --rc genhtml_function_coverage=1 00:26:52.735 --rc genhtml_legend=1 00:26:52.735 --rc geninfo_all_blocks=1 00:26:52.735 --rc geninfo_unexecuted_blocks=1 00:26:52.735 00:26:52.735 ' 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:26:52.735 14:28:44 -- common/autotest_common.sh@1519 -- # bdfs=() 00:26:52.735 14:28:44 -- common/autotest_common.sh@1519 -- # local bdfs 00:26:52.735 14:28:44 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:26:52.735 14:28:44 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:26:52.735 14:28:44 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:52.735 14:28:44 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:52.735 14:28:44 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:52.735 14:28:44 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:52.735 14:28:44 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:52.735 14:28:44 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:52.735 14:28:44 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:26:52.735 14:28:44 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=148566 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:52.735 14:28:44 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 148566 00:26:52.735 14:28:44 -- common/autotest_common.sh@829 -- # '[' -z 148566 ']' 00:26:52.735 14:28:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.735 14:28:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.735 14:28:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.735 14:28:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.735 14:28:44 -- common/autotest_common.sh@10 -- # set +x 00:26:52.735 [2024-11-18 14:28:44.753535] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:26:52.735 [2024-11-18 14:28:44.753774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148566 ] 00:26:52.995 [2024-11-18 14:28:44.940552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.995 [2024-11-18 14:28:45.016233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:52.995 [2024-11-18 14:28:45.017022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.995 [2024-11-18 14:28:45.017173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.995 [2024-11-18 14:28:45.017281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.995 [2024-11-18 14:28:45.017282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.931 14:28:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:53.931 14:28:45 -- common/autotest_common.sh@862 -- # return 0 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:26:53.931 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.931 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:26:53.931 nvme0n1 00:26:53.931 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WUoTg.txt 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:26:53.931 14:28:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.931 14:28:45 -- common/autotest_common.sh@10 -- # set +x 00:26:53.931 true 00:26:53.931 14:28:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731940125 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=148594 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:53.931 14:28:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:55.836 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.836 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:26:55.836 [2024-11-18 14:28:47.783732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:26:55.836 [2024-11-18 14:28:47.784537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.836 [2024-11-18 14:28:47.784759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:55.836 [2024-11-18 14:28:47.784923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.836 [2024-11-18 14:28:47.787011] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:55.836 14:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 148594 00:26:55.836 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 148594 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 148594 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.836 14:28:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.836 14:28:47 -- common/autotest_common.sh@10 -- # set +x 00:26:55.836 14:28:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WUoTg.txt 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WUoTg.txt 00:26:55.836 14:28:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 148566 00:26:55.836 14:28:47 -- common/autotest_common.sh@936 -- # '[' -z 148566 ']' 00:26:55.836 14:28:47 -- common/autotest_common.sh@940 -- # kill -0 148566 00:26:55.836 14:28:47 -- common/autotest_common.sh@941 -- # uname 00:26:55.836 
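The status decode above is worth unpacking: the admin completion saved to the temp file comes back as a base64 blob, and base64_decode_bits shifts and masks the NVMe status word out of it. A sketch under the layout the trace implies; the byte fold itself is not visible in the xtrace, so treat that part as an assumption:

    # Decode a base64 NVMe completion and print (status >> shift) & mask.
    # The 16-byte completion keeps its status word in the upper half of DW3
    # (bytes 14-15, little-endian); SC is bits 1-8, SCT starts at bit 9.
    base64_decode_bits() {
        local b64=$1 shift_by=$2 mask=$3
        local bin_array status
        bin_array=($(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"'))
        status=$(( (bin_array[15] << 8) | bin_array[14] ))   # assumed fold
        printf '0x%x' $(( (status >> shift_by) & mask ))
    }
    # This run: base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255  -> 0x1 (SC)
    #           base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3    -> 0x0 (SCT)

Both values match the injected --sc 1 --sct 0, which is exactly what the (( err_injection_sc != nvme_status_sc ... )) assertion traced further down checks before the test is allowed to pass.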
14:28:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:55.836 14:28:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148566 00:26:56.095 14:28:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:56.095 14:28:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:56.095 14:28:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148566' 00:26:56.095 killing process with pid 148566 00:26:56.095 14:28:47 -- common/autotest_common.sh@955 -- # kill 148566 00:26:56.095 14:28:47 -- common/autotest_common.sh@960 -- # wait 148566 00:26:56.355 14:28:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:26:56.355 14:28:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:26:56.355 00:26:56.355 real 0m3.886s 00:26:56.355 user 0m13.631s 00:26:56.355 sys 0m0.620s 00:26:56.355 14:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:56.355 14:28:48 -- common/autotest_common.sh@10 -- # set +x 00:26:56.355 ************************************ 00:26:56.355 END TEST bdev_nvme_reset_stuck_adm_cmd 00:26:56.355 ************************************ 00:26:56.355 14:28:48 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:26:56.355 14:28:48 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:26:56.355 14:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:56.355 14:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.355 14:28:48 -- common/autotest_common.sh@10 -- # set +x 00:26:56.355 ************************************ 00:26:56.355 START TEST nvme_fio 00:26:56.355 ************************************ 00:26:56.355 14:28:48 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:26:56.355 14:28:48 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:56.355 14:28:48 -- nvme/nvme.sh@32 -- # ran_fio=false 00:26:56.355 14:28:48 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:26:56.355 14:28:48 -- common/autotest_common.sh@1508 -- # bdfs=() 00:26:56.355 14:28:48 -- common/autotest_common.sh@1508 -- # local bdfs 00:26:56.355 14:28:48 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:56.355 14:28:48 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:56.355 14:28:48 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:26:56.613 14:28:48 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:26:56.613 14:28:48 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:26:56.613 14:28:48 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:26:56.613 14:28:48 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:26:56.613 14:28:48 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:26:56.613 14:28:48 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:56.613 14:28:48 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:26:56.613 14:28:48 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:26:56.613 14:28:48 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:26:56.872 14:28:48 -- nvme/nvme.sh@41 -- # bs=4096 00:26:56.872 14:28:48 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:26:56.872 
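Before the fio expansion that follows, the wrapper's moving parts in one place: fio_nvme resolves SPDK's fio plugin, checks via ldd whether it was built against a sanitizer (only the libasan branch is exercised in this run), and preloads both libraries so fio's dlopen of the SPDK ioengine can resolve the interceptor symbols. A condensed sketch of what the trace below walks through; the paths match this run, the rest is inferred:

    # Run fio with the SPDK NVMe ioengine, preloading ASan first when the
    # plugin links against it (otherwise dlopen fails at symbol lookup).
    fio_nvme() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local asan_lib
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }

Note the filename handed to fio: 'trtype=PCIe traddr=0000.00.06.0' writes the PCI address with dots because fio treats ':' in filenames as a separator.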
14:28:48 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:26:56.872 14:28:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:56.872 14:28:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:56.872 14:28:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:56.872 14:28:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:56.872 14:28:48 -- common/autotest_common.sh@1330 -- # shift 00:26:56.872 14:28:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:56.872 14:28:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:56.872 14:28:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:56.872 14:28:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:56.872 14:28:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:56.872 14:28:48 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:26:56.872 14:28:48 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:26:56.872 14:28:48 -- common/autotest_common.sh@1336 -- # break 00:26:56.872 14:28:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:56.872 14:28:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:26:57.132 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:57.132 fio-3.35 00:26:57.132 Starting 1 thread 00:27:00.423 00:27:00.423 test: (groupid=0, jobs=1): err= 0: pid=148729: Mon Nov 18 14:28:52 2024 00:27:00.423 read: IOPS=15.9k, BW=62.0MiB/s (65.0MB/s)(124MiB/2001msec) 00:27:00.423 slat (usec): min=3, max=449, avg= 5.65, stdev= 4.35 00:27:00.423 clat (usec): min=277, max=9898, avg=4007.95, stdev=255.33 00:27:00.423 lat (usec): min=282, max=9999, avg=4013.60, stdev=255.65 00:27:00.423 clat percentiles (usec): 00:27:00.423 | 1.00th=[ 3654], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3884], 00:27:00.423 | 30.00th=[ 3916], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:27:00.423 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4293], 00:27:00.423 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 7242], 99.95th=[ 8094], 00:27:00.423 | 99.99th=[ 9372] 00:27:00.423 bw ( KiB/s): min=62112, max=64864, per=100.00%, avg=63573.33, stdev=1383.92, samples=3 00:27:00.423 iops : min=15528, max=16216, avg=15893.33, stdev=345.98, samples=3 00:27:00.423 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(124MiB/2001msec); 0 zone resets 00:27:00.423 slat (nsec): min=3684, max=49459, avg=5760.20, stdev=3511.21 00:27:00.423 clat (usec): min=245, max=9584, avg=4023.53, stdev=257.04 00:27:00.423 lat (usec): min=249, max=9613, avg=4029.29, stdev=257.29 00:27:00.423 clat percentiles (usec): 00:27:00.423 | 1.00th=[ 3654], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3884], 00:27:00.423 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:27:00.423 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4293], 00:27:00.423 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 7439], 99.95th=[ 8160], 
00:27:00.423 | 99.99th=[ 9372] 00:27:00.423 bw ( KiB/s): min=62640, max=64400, per=99.52%, avg=63288.00, stdev=967.40, samples=3 00:27:00.423 iops : min=15660, max=16100, avg=15822.00, stdev=241.85, samples=3 00:27:00.423 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:27:00.423 lat (msec) : 2=0.05%, 4=50.75%, 10=49.16% 00:27:00.423 cpu : usr=99.95%, sys=0.00%, ctx=12, majf=0, minf=39 00:27:00.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:00.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:00.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:00.423 issued rwts: total=31778,31813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:00.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:00.423 00:27:00.423 Run status group 0 (all jobs): 00:27:00.423 READ: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=124MiB (130MB), run=2001-2001msec 00:27:00.423 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=124MiB (130MB), run=2001-2001msec 00:27:00.423 ----------------------------------------------------- 00:27:00.423 Suppressions used: 00:27:00.423 count bytes template 00:27:00.423 1 32 /usr/src/fio/parse.c 00:27:00.423 ----------------------------------------------------- 00:27:00.423 00:27:00.423 14:28:52 -- nvme/nvme.sh@44 -- # ran_fio=true 00:27:00.423 14:28:52 -- nvme/nvme.sh@46 -- # true 00:27:00.423 00:27:00.423 real 0m4.030s 00:27:00.423 user 0m3.397s 00:27:00.423 sys 0m0.319s 00:27:00.423 14:28:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.423 14:28:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.423 ************************************ 00:27:00.423 END TEST nvme_fio 00:27:00.423 ************************************ 00:27:00.423 00:27:00.423 real 0m44.815s 00:27:00.423 user 1m57.095s 00:27:00.423 sys 0m7.428s 00:27:00.423 14:28:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.423 14:28:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.423 ************************************ 00:27:00.423 END TEST nvme 00:27:00.423 ************************************ 00:27:00.682 14:28:52 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:27:00.682 14:28:52 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:27:00.682 14:28:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.682 14:28:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.682 14:28:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.682 ************************************ 00:27:00.682 START TEST nvme_scc 00:27:00.682 ************************************ 00:27:00.682 14:28:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:27:00.682 * Looking for test storage... 
00:27:00.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:00.682 14:28:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:00.682 14:28:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:00.683 14:28:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:00.683 14:28:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:00.683 14:28:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:00.683 14:28:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:00.683 14:28:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:00.683 14:28:52 -- scripts/common.sh@335 -- # IFS=.-: 00:27:00.683 14:28:52 -- scripts/common.sh@335 -- # read -ra ver1 00:27:00.683 14:28:52 -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.683 14:28:52 -- scripts/common.sh@336 -- # read -ra ver2 00:27:00.683 14:28:52 -- scripts/common.sh@337 -- # local 'op=<' 00:27:00.683 14:28:52 -- scripts/common.sh@339 -- # ver1_l=2 00:27:00.683 14:28:52 -- scripts/common.sh@340 -- # ver2_l=1 00:27:00.683 14:28:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:00.683 14:28:52 -- scripts/common.sh@343 -- # case "$op" in 00:27:00.683 14:28:52 -- scripts/common.sh@344 -- # : 1 00:27:00.683 14:28:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:00.683 14:28:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.683 14:28:52 -- scripts/common.sh@364 -- # decimal 1 00:27:00.683 14:28:52 -- scripts/common.sh@352 -- # local d=1 00:27:00.683 14:28:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.683 14:28:52 -- scripts/common.sh@354 -- # echo 1 00:27:00.683 14:28:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:00.683 14:28:52 -- scripts/common.sh@365 -- # decimal 2 00:27:00.683 14:28:52 -- scripts/common.sh@352 -- # local d=2 00:27:00.683 14:28:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.683 14:28:52 -- scripts/common.sh@354 -- # echo 2 00:27:00.683 14:28:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:00.683 14:28:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:00.683 14:28:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:00.683 14:28:52 -- scripts/common.sh@367 -- # return 0 00:27:00.683 14:28:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.683 14:28:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:00.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.683 --rc genhtml_branch_coverage=1 00:27:00.683 --rc genhtml_function_coverage=1 00:27:00.683 --rc genhtml_legend=1 00:27:00.683 --rc geninfo_all_blocks=1 00:27:00.683 --rc geninfo_unexecuted_blocks=1 00:27:00.683 00:27:00.683 ' 00:27:00.683 14:28:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:00.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.683 --rc genhtml_branch_coverage=1 00:27:00.683 --rc genhtml_function_coverage=1 00:27:00.683 --rc genhtml_legend=1 00:27:00.683 --rc geninfo_all_blocks=1 00:27:00.683 --rc geninfo_unexecuted_blocks=1 00:27:00.683 00:27:00.683 ' 00:27:00.683 14:28:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:00.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.683 --rc genhtml_branch_coverage=1 00:27:00.683 --rc genhtml_function_coverage=1 00:27:00.683 --rc genhtml_legend=1 00:27:00.683 --rc geninfo_all_blocks=1 00:27:00.683 --rc geninfo_unexecuted_blocks=1 00:27:00.683 00:27:00.683 ' 00:27:00.683 14:28:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:00.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.683 --rc genhtml_branch_coverage=1 00:27:00.683 --rc genhtml_function_coverage=1 00:27:00.683 --rc genhtml_legend=1 00:27:00.683 --rc geninfo_all_blocks=1 00:27:00.683 --rc geninfo_unexecuted_blocks=1 00:27:00.683 00:27:00.683 ' 00:27:00.683 14:28:52 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:00.683 14:28:52 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:00.683 14:28:52 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:27:00.683 14:28:52 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:00.683 14:28:52 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:00.683 14:28:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.683 14:28:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.683 14:28:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.683 14:28:52 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.683 14:28:52 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.683 14:28:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.683 14:28:52 -- paths/export.sh@5 -- # export PATH 00:27:00.683 14:28:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.683 14:28:52 -- nvme/functions.sh@10 -- # ctrls=() 00:27:00.683 14:28:52 -- nvme/functions.sh@10 -- # declare -A ctrls 00:27:00.683 14:28:52 -- nvme/functions.sh@11 -- # nvmes=() 00:27:00.683 14:28:52 -- nvme/functions.sh@11 -- # declare -A nvmes 00:27:00.683 14:28:52 -- nvme/functions.sh@12 -- # bdfs=() 00:27:00.683 14:28:52 -- nvme/functions.sh@12 -- # declare -A bdfs 00:27:00.683 14:28:52 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:27:00.683 14:28:52 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:27:00.683 14:28:52 -- nvme/functions.sh@14 -- # nvme_name= 00:27:00.683 
14:28:52 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.683 14:28:52 -- nvme/nvme_scc.sh@12 -- # uname 00:27:00.683 14:28:52 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:27:00.683 14:28:52 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:27:00.683 14:28:52 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:00.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:01.204 Waiting for block devices as requested 00:27:01.204 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:01.204 14:28:53 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:27:01.204 14:28:53 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:27:01.204 14:28:53 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:01.204 14:28:53 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:27:01.204 14:28:53 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:27:01.204 14:28:53 -- scripts/common.sh@15 -- # local i 00:27:01.204 14:28:53 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:27:01.204 14:28:53 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:01.204 14:28:53 -- scripts/common.sh@24 -- # return 0 00:27:01.204 14:28:53 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:27:01.204 14:28:53 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:27:01.204 14:28:53 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@18 -- # shift 00:27:01.204 14:28:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:01.204 
14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 
-- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.204 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:27:01.204 14:28:53 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.204 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:27:01.205 
14:28:53 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- 
nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.205 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:27:01.205 14:28:53 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:27:01.205 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 
00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- 
# nvme0[awun]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:01.206 14:28:53 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.206 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.206 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:27:01.207 14:28:53 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:27:01.207 14:28:53 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:27:01.207 14:28:53 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 
id-ns /dev/nvme0n1 00:27:01.207 14:28:53 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@18 -- # shift 00:27:01.207 14:28:53 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # 
nvme0n1[dps]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.207 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:27:01.207 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:27:01.207 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r 
reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:01.208 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.208 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.208 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.488 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.488 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.488 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.488 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.488 14:28:53 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:01.488 14:28:53 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # IFS=: 00:27:01.488 14:28:53 -- nvme/functions.sh@21 -- # read -r reg val 00:27:01.488 14:28:53 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:27:01.488 14:28:53 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:27:01.488 14:28:53 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:27:01.488 14:28:53 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:27:01.488 14:28:53 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:27:01.488 14:28:53 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:27:01.488 14:28:53 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:27:01.488 14:28:53 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:27:01.488 14:28:53 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:27:01.488 14:28:53 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:27:01.488 14:28:53 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:27:01.488 14:28:53 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:27:01.488 14:28:53 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:27:01.488 14:28:53 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:27:01.488 14:28:53 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:27:01.488 14:28:53 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:27:01.488 14:28:53 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:27:01.488 14:28:53 -- 
nvme/functions.sh@76 -- # echo 0x15d 00:27:01.488 14:28:53 -- nvme/functions.sh@184 -- # oncs=0x15d 00:27:01.488 14:28:53 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:27:01.488 14:28:53 -- nvme/functions.sh@197 -- # echo nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:27:01.488 14:28:53 -- nvme/functions.sh@206 -- # echo nvme0 00:27:01.488 14:28:53 -- nvme/functions.sh@207 -- # return 0 00:27:01.488 14:28:53 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:27:01.488 14:28:53 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:27:01.488 14:28:53 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:01.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:01.759 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:03.138 14:28:54 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:27:03.138 14:28:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:27:03.138 14:28:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.138 14:28:54 -- common/autotest_common.sh@10 -- # set +x 00:27:03.138 ************************************ 00:27:03.138 START TEST nvme_simple_copy 00:27:03.138 ************************************ 00:27:03.138 14:28:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:27:03.138 Initializing NVMe Controllers 00:27:03.138 Attaching to 0000:00:06.0 00:27:03.138 Controller supports SCC. Attached to 0000:00:06.0 00:27:03.138 Namespace ID: 1 size: 5GB 00:27:03.138 Initialization complete. 00:27:03.138 00:27:03.138 Controller QEMU NVMe Ctrl (12340 ) 00:27:03.138 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:27:03.138 Namespace Block Size:4096 00:27:03.138 Writing LBAs 0 to 63 with Random Data 00:27:03.138 Copied LBAs from 0 - 63 to the Destination LBA 256 00:27:03.138 LBAs matching Written Data: 64 00:27:03.138 00:27:03.138 real 0m0.270s 00:27:03.138 user 0m0.116s 00:27:03.138 sys 0m0.054s 00:27:03.138 14:28:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.138 14:28:55 -- common/autotest_common.sh@10 -- # set +x 00:27:03.138 ************************************ 00:27:03.138 END TEST nvme_simple_copy 00:27:03.138 ************************************ 00:27:03.138 00:27:03.138 real 0m2.685s 00:27:03.138 user 0m0.867s 00:27:03.138 sys 0m1.729s 00:27:03.138 14:28:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.138 14:28:55 -- common/autotest_common.sh@10 -- # set +x 00:27:03.138 ************************************ 00:27:03.138 END TEST nvme_scc 00:27:03.138 ************************************ 00:27:03.398 14:28:55 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:27:03.398 14:28:55 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:27:03.398 14:28:55 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:27:03.398 14:28:55 -- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]] 00:27:03.398 14:28:55 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:27:03.398 14:28:55 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:27:03.398 14:28:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.398 14:28:55 -- common/autotest_common.sh@10 -- # set +x 00:27:03.398 ************************************ 00:27:03.398 START TEST nvme_rpc 
00:27:03.398 ************************************ 00:27:03.398 14:28:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:27:03.398 * Looking for test storage... 00:27:03.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:03.398 14:28:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:03.398 14:28:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:03.398 14:28:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:03.398 14:28:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:03.398 14:28:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:03.398 14:28:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:03.398 14:28:55 -- scripts/common.sh@335 -- # IFS=.-: 00:27:03.398 14:28:55 -- scripts/common.sh@335 -- # read -ra ver1 00:27:03.398 14:28:55 -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.398 14:28:55 -- scripts/common.sh@336 -- # read -ra ver2 00:27:03.398 14:28:55 -- scripts/common.sh@337 -- # local 'op=<' 00:27:03.398 14:28:55 -- scripts/common.sh@339 -- # ver1_l=2 00:27:03.398 14:28:55 -- scripts/common.sh@340 -- # ver2_l=1 00:27:03.398 14:28:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:03.398 14:28:55 -- scripts/common.sh@343 -- # case "$op" in 00:27:03.398 14:28:55 -- scripts/common.sh@344 -- # : 1 00:27:03.398 14:28:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:03.398 14:28:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:03.398 14:28:55 -- scripts/common.sh@364 -- # decimal 1 00:27:03.398 14:28:55 -- scripts/common.sh@352 -- # local d=1 00:27:03.398 14:28:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.398 14:28:55 -- scripts/common.sh@354 -- # echo 1 00:27:03.398 14:28:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:03.398 14:28:55 -- scripts/common.sh@365 -- # decimal 2 00:27:03.398 14:28:55 -- scripts/common.sh@352 -- # local d=2 00:27:03.398 14:28:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.398 14:28:55 -- scripts/common.sh@354 -- # echo 2 00:27:03.398 14:28:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:03.398 14:28:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:03.398 14:28:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:03.398 14:28:55 -- scripts/common.sh@367 -- # return 0 00:27:03.398 14:28:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:03.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.398 --rc genhtml_branch_coverage=1 00:27:03.398 --rc genhtml_function_coverage=1 00:27:03.398 --rc genhtml_legend=1 00:27:03.398 --rc geninfo_all_blocks=1 00:27:03.398 --rc geninfo_unexecuted_blocks=1 00:27:03.398 00:27:03.398 ' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:03.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.398 --rc genhtml_branch_coverage=1 00:27:03.398 --rc genhtml_function_coverage=1 00:27:03.398 --rc genhtml_legend=1 00:27:03.398 --rc geninfo_all_blocks=1 00:27:03.398 --rc geninfo_unexecuted_blocks=1 00:27:03.398 00:27:03.398 ' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:03.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.398 --rc genhtml_branch_coverage=1 00:27:03.398 
--rc genhtml_function_coverage=1 00:27:03.398 --rc genhtml_legend=1 00:27:03.398 --rc geninfo_all_blocks=1 00:27:03.398 --rc geninfo_unexecuted_blocks=1 00:27:03.398 00:27:03.398 ' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:03.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.398 --rc genhtml_branch_coverage=1 00:27:03.398 --rc genhtml_function_coverage=1 00:27:03.398 --rc genhtml_legend=1 00:27:03.398 --rc geninfo_all_blocks=1 00:27:03.398 --rc geninfo_unexecuted_blocks=1 00:27:03.398 00:27:03.398 ' 00:27:03.398 14:28:55 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.398 14:28:55 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:27:03.398 14:28:55 -- common/autotest_common.sh@1519 -- # bdfs=() 00:27:03.398 14:28:55 -- common/autotest_common.sh@1519 -- # local bdfs 00:27:03.398 14:28:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:27:03.398 14:28:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:27:03.398 14:28:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:27:03.398 14:28:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:27:03.398 14:28:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:03.398 14:28:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:27:03.398 14:28:55 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:03.657 14:28:55 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:27:03.657 14:28:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:27:03.657 14:28:55 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:27:03.657 14:28:55 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:27:03.657 14:28:55 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=149230 00:27:03.657 14:28:55 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:27:03.657 14:28:55 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:27:03.657 14:28:55 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 149230 00:27:03.657 14:28:55 -- common/autotest_common.sh@829 -- # '[' -z 149230 ']' 00:27:03.657 14:28:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.657 14:28:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.657 14:28:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.657 14:28:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.657 14:28:55 -- common/autotest_common.sh@10 -- # set +x 00:27:03.657 [2024-11-18 14:28:55.532015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:03.657 [2024-11-18 14:28:55.532195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149230 ] 00:27:03.657 [2024-11-18 14:28:55.687288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:03.917 [2024-11-18 14:28:55.766041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:03.917 [2024-11-18 14:28:55.766712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.917 [2024-11-18 14:28:55.766719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.484 14:28:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.484 14:28:56 -- common/autotest_common.sh@862 -- # return 0 00:27:04.484 14:28:56 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:27:04.743 Nvme0n1 00:27:05.001 14:28:56 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:27:05.002 14:28:56 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:27:05.002 request: 00:27:05.002 { 00:27:05.002 "filename": "non_existing_file", 00:27:05.002 "bdev_name": "Nvme0n1", 00:27:05.002 "method": "bdev_nvme_apply_firmware", 00:27:05.002 "req_id": 1 00:27:05.002 } 00:27:05.002 Got JSON-RPC error response 00:27:05.002 response: 00:27:05.002 { 00:27:05.002 "code": -32603, 00:27:05.002 "message": "open file failed." 00:27:05.002 } 00:27:05.002 14:28:57 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:27:05.002 14:28:57 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:27:05.002 14:28:57 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:05.261 14:28:57 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:05.261 14:28:57 -- nvme/nvme_rpc.sh@40 -- # killprocess 149230 00:27:05.261 14:28:57 -- common/autotest_common.sh@936 -- # '[' -z 149230 ']' 00:27:05.261 14:28:57 -- common/autotest_common.sh@940 -- # kill -0 149230 00:27:05.261 14:28:57 -- common/autotest_common.sh@941 -- # uname 00:27:05.261 14:28:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:05.261 14:28:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149230 00:27:05.261 14:28:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:05.261 14:28:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:05.261 14:28:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149230' 00:27:05.261 killing process with pid 149230 00:27:05.261 14:28:57 -- common/autotest_common.sh@955 -- # kill 149230 00:27:05.261 14:28:57 -- common/autotest_common.sh@960 -- # wait 149230 00:27:05.829 00:27:05.829 real 0m2.472s 00:27:05.829 user 0m4.907s 00:27:05.829 sys 0m0.587s 00:27:05.829 14:28:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.829 14:28:57 -- common/autotest_common.sh@10 -- # set +x 00:27:05.829 ************************************ 00:27:05.829 END TEST nvme_rpc 00:27:05.829 ************************************ 00:27:05.829 14:28:57 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:27:05.829 14:28:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:05.829 14:28:57 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:27:05.830 14:28:57 -- common/autotest_common.sh@10 -- # set +x 00:27:05.830 ************************************ 00:27:05.830 START TEST nvme_rpc_timeouts 00:27:05.830 ************************************ 00:27:05.830 14:28:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:27:05.830 * Looking for test storage... 00:27:05.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:05.830 14:28:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:05.830 14:28:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:05.830 14:28:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:06.089 14:28:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:06.089 14:28:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:06.089 14:28:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:06.089 14:28:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:06.089 14:28:57 -- scripts/common.sh@335 -- # IFS=.-: 00:27:06.089 14:28:57 -- scripts/common.sh@335 -- # read -ra ver1 00:27:06.089 14:28:57 -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.089 14:28:57 -- scripts/common.sh@336 -- # read -ra ver2 00:27:06.089 14:28:57 -- scripts/common.sh@337 -- # local 'op=<' 00:27:06.089 14:28:57 -- scripts/common.sh@339 -- # ver1_l=2 00:27:06.089 14:28:57 -- scripts/common.sh@340 -- # ver2_l=1 00:27:06.089 14:28:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:06.089 14:28:57 -- scripts/common.sh@343 -- # case "$op" in 00:27:06.089 14:28:57 -- scripts/common.sh@344 -- # : 1 00:27:06.089 14:28:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:06.089 14:28:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:06.089 14:28:57 -- scripts/common.sh@364 -- # decimal 1 00:27:06.089 14:28:57 -- scripts/common.sh@352 -- # local d=1 00:27:06.089 14:28:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.089 14:28:57 -- scripts/common.sh@354 -- # echo 1 00:27:06.089 14:28:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:06.089 14:28:57 -- scripts/common.sh@365 -- # decimal 2 00:27:06.089 14:28:57 -- scripts/common.sh@352 -- # local d=2 00:27:06.089 14:28:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.089 14:28:57 -- scripts/common.sh@354 -- # echo 2 00:27:06.089 14:28:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:06.089 14:28:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:06.089 14:28:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:06.089 14:28:57 -- scripts/common.sh@367 -- # return 0 00:27:06.089 14:28:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.089 14:28:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.089 --rc genhtml_branch_coverage=1 00:27:06.089 --rc genhtml_function_coverage=1 00:27:06.089 --rc genhtml_legend=1 00:27:06.089 --rc geninfo_all_blocks=1 00:27:06.089 --rc geninfo_unexecuted_blocks=1 00:27:06.089 00:27:06.089 ' 00:27:06.089 14:28:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.089 --rc genhtml_branch_coverage=1 00:27:06.089 --rc genhtml_function_coverage=1 00:27:06.089 --rc genhtml_legend=1 00:27:06.089 --rc geninfo_all_blocks=1 00:27:06.089 --rc geninfo_unexecuted_blocks=1 00:27:06.089 00:27:06.089 ' 00:27:06.089 14:28:57 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.089 --rc genhtml_branch_coverage=1 00:27:06.089 --rc genhtml_function_coverage=1 00:27:06.089 --rc genhtml_legend=1 00:27:06.089 --rc geninfo_all_blocks=1 00:27:06.089 --rc geninfo_unexecuted_blocks=1 00:27:06.089 00:27:06.089 ' 00:27:06.089 14:28:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.089 --rc genhtml_branch_coverage=1 00:27:06.089 --rc genhtml_function_coverage=1 00:27:06.089 --rc genhtml_legend=1 00:27:06.089 --rc geninfo_all_blocks=1 00:27:06.089 --rc geninfo_unexecuted_blocks=1 00:27:06.089 00:27:06.089 ' 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_149292 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_149292 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=149324 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:27:06.089 14:28:57 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 149324 00:27:06.089 14:28:57 -- common/autotest_common.sh@829 -- # '[' -z 149324 ']' 00:27:06.089 14:28:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.089 14:28:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.089 14:28:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.089 14:28:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.089 14:28:57 -- common/autotest_common.sh@10 -- # set +x 00:27:06.089 [2024-11-18 14:28:57.972625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:06.089 [2024-11-18 14:28:57.973065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149324 ] 00:27:06.089 [2024-11-18 14:28:58.111842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.348 [2024-11-18 14:28:58.170301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:06.348 [2024-11-18 14:28:58.170920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.348 [2024-11-18 14:28:58.170914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.916 14:28:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:06.916 14:28:58 -- common/autotest_common.sh@862 -- # return 0 00:27:06.916 14:28:58 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:27:06.916 Checking default timeout settings: 00:27:06.916 14:28:58 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:07.174 14:28:59 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:27:07.174 Making settings changes with rpc: 00:27:07.174 14:28:59 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:27:07.433 14:28:59 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:27:07.433 Check default vs. modified settings: 00:27:07.433 14:28:59 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:27:08.000 Setting action_on_timeout is changed as expected. 
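The per-setting verification traced here follows a fixed pattern: save the target configuration before and after the RPC change, then grep one field out of each dump and compare the stripped values. A minimal standalone sketch of that pattern, assuming a running spdk_tgt with rpc.py on PATH and the default /var/tmp/spdk.sock; the temp-file handling here is illustrative, not the test's own:

#!/usr/bin/env bash
# Sketch: verify that RPC-applied bdev_nvme timeouts show up in save_config output.
# Assumes a running spdk_tgt reachable via rpc.py on the default socket.
set -euo pipefail

default_json=$(mktemp)
modified_json=$(mktemp)
trap 'rm -f "$default_json" "$modified_json"' EXIT

rpc.py save_config > "$default_json"        # snapshot the default settings
rpc.py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
rpc.py save_config > "$modified_json"       # snapshot again after the change

check_setting() {
    # Extract the field from each dump and strip JSON punctuation, as the test does.
    local key=$1 want=$2 before after
    before=$(grep "\"$key\":" "$default_json"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep  "\"$key\":" "$modified_json" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ "$before" != "$after" && "$after" == "$want" ]]; then
        echo "Setting $key is changed as expected."
    else
        echo "Setting $key NOT changed as expected ($before -> $after)"
        return 1
    fi
}

check_setting action_on_timeout abort
check_setting timeout_us        12000000
check_setting timeout_admin_us  24000000

Comparing text dumps with grep/awk/sed rather than parsing the JSON keeps the check free of a jq dependency, at the cost of relying on save_config's one-field-per-line layout.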
00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:27:08.000 Setting timeout_us is changed as expected. 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:27:08.000 Setting timeout_admin_us is changed as expected. 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_149292 /tmp/settings_modified_149292 00:27:08.000 14:28:59 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 149324 00:27:08.000 14:28:59 -- common/autotest_common.sh@936 -- # '[' -z 149324 ']' 00:27:08.000 14:28:59 -- common/autotest_common.sh@940 -- # kill -0 149324 00:27:08.000 14:28:59 -- common/autotest_common.sh@941 -- # uname 00:27:08.000 14:28:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:08.001 14:28:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149324 00:27:08.001 14:28:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:08.001 14:28:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:08.001 14:28:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149324' 00:27:08.001 killing process with pid 149324 00:27:08.001 14:28:59 -- common/autotest_common.sh@955 -- # kill 149324 00:27:08.001 14:28:59 -- common/autotest_common.sh@960 -- # wait 149324 00:27:08.260 RPC TIMEOUT SETTING TEST PASSED. 00:27:08.260 14:29:00 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
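killprocess, traced just above for pid 149324, is the harness's guarded teardown: it refuses to signal a PID unless the process is still alive and its command name is not a sudo wrapper, then kills and waits so the exit status is reaped. A condensed sketch of that guard (simplified from the trace; the full helper also resolves sudo-wrapped targets to their child process, which is omitted here):

# Condensed killprocess guard, as seen in the trace (Linux-only sketch).
killprocess() {
    local pid=$1
    [[ -z "$pid" ]] && return 1
    kill -0 "$pid" 2>/dev/null || return 1      # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. "reactor_0" for spdk_tgt
    [[ "$name" == sudo ]] && return 1           # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it; ignore if not our child
}

The kill -0 probe and the ps comm= check are what keep a stale pid file from taking down an unrelated process that happened to reuse the number.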
00:27:08.260 00:27:08.260 real 0m2.488s 00:27:08.260 user 0m5.054s 00:27:08.260 sys 0m0.524s 00:27:08.260 14:29:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:08.260 14:29:00 -- common/autotest_common.sh@10 -- # set +x 00:27:08.260 ************************************ 00:27:08.260 END TEST nvme_rpc_timeouts 00:27:08.260 ************************************ 00:27:08.260 14:29:00 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]] 00:27:08.260 14:29:00 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@255 -- # timing_exit lib 00:27:08.260 14:29:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.260 14:29:00 -- common/autotest_common.sh@10 -- # set +x 00:27:08.260 14:29:00 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:08.260 14:29:00 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:08.260 14:29:00 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:08.260 14:29:00 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:08.260 14:29:00 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]] 00:27:08.260 14:29:00 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:27:08.260 14:29:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:08.260 14:29:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.260 14:29:00 -- common/autotest_common.sh@10 -- # set +x 00:27:08.519 ************************************ 00:27:08.519 START TEST blockdev_raid5f 00:27:08.519 ************************************ 00:27:08.519 14:29:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:27:08.519 * Looking for test storage... 
00:27:08.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:08.519 14:29:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:08.519 14:29:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:08.519 14:29:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:08.519 14:29:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:08.519 14:29:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:08.519 14:29:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:08.519 14:29:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:08.519 14:29:00 -- scripts/common.sh@335 -- # IFS=.-: 00:27:08.519 14:29:00 -- scripts/common.sh@335 -- # read -ra ver1 00:27:08.519 14:29:00 -- scripts/common.sh@336 -- # IFS=.-: 00:27:08.519 14:29:00 -- scripts/common.sh@336 -- # read -ra ver2 00:27:08.519 14:29:00 -- scripts/common.sh@337 -- # local 'op=<' 00:27:08.519 14:29:00 -- scripts/common.sh@339 -- # ver1_l=2 00:27:08.519 14:29:00 -- scripts/common.sh@340 -- # ver2_l=1 00:27:08.519 14:29:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:08.519 14:29:00 -- scripts/common.sh@343 -- # case "$op" in 00:27:08.519 14:29:00 -- scripts/common.sh@344 -- # : 1 00:27:08.519 14:29:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:08.519 14:29:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:08.519 14:29:00 -- scripts/common.sh@364 -- # decimal 1 00:27:08.519 14:29:00 -- scripts/common.sh@352 -- # local d=1 00:27:08.519 14:29:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:08.519 14:29:00 -- scripts/common.sh@354 -- # echo 1 00:27:08.519 14:29:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:08.519 14:29:00 -- scripts/common.sh@365 -- # decimal 2 00:27:08.519 14:29:00 -- scripts/common.sh@352 -- # local d=2 00:27:08.519 14:29:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:08.519 14:29:00 -- scripts/common.sh@354 -- # echo 2 00:27:08.519 14:29:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:08.519 14:29:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:08.519 14:29:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:08.519 14:29:00 -- scripts/common.sh@367 -- # return 0 00:27:08.519 14:29:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:08.519 14:29:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.520 --rc genhtml_branch_coverage=1 00:27:08.520 --rc genhtml_function_coverage=1 00:27:08.520 --rc genhtml_legend=1 00:27:08.520 --rc geninfo_all_blocks=1 00:27:08.520 --rc geninfo_unexecuted_blocks=1 00:27:08.520 00:27:08.520 ' 00:27:08.520 14:29:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.520 --rc genhtml_branch_coverage=1 00:27:08.520 --rc genhtml_function_coverage=1 00:27:08.520 --rc genhtml_legend=1 00:27:08.520 --rc geninfo_all_blocks=1 00:27:08.520 --rc geninfo_unexecuted_blocks=1 00:27:08.520 00:27:08.520 ' 00:27:08.520 14:29:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.520 --rc genhtml_branch_coverage=1 00:27:08.520 --rc genhtml_function_coverage=1 00:27:08.520 --rc genhtml_legend=1 00:27:08.520 --rc geninfo_all_blocks=1 00:27:08.520 --rc geninfo_unexecuted_blocks=1 00:27:08.520 00:27:08.520 ' 00:27:08.520 14:29:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:08.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:08.520 --rc genhtml_branch_coverage=1 00:27:08.520 --rc genhtml_function_coverage=1 00:27:08.520 --rc genhtml_legend=1 00:27:08.520 --rc geninfo_all_blocks=1 00:27:08.520 --rc geninfo_unexecuted_blocks=1 00:27:08.520 00:27:08.520 ' 00:27:08.520 14:29:00 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:08.520 14:29:00 -- bdev/nbd_common.sh@6 -- # set -e 00:27:08.520 14:29:00 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:08.520 14:29:00 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:08.520 14:29:00 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:08.520 14:29:00 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:08.520 14:29:00 -- bdev/blockdev.sh@18 -- # : 00:27:08.520 14:29:00 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:08.520 14:29:00 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:08.520 14:29:00 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:08.520 14:29:00 -- bdev/blockdev.sh@672 -- # uname -s 00:27:08.520 14:29:00 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:08.520 14:29:00 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:08.520 14:29:00 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:27:08.520 14:29:00 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:08.520 14:29:00 -- bdev/blockdev.sh@682 -- # dek= 00:27:08.520 14:29:00 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:08.520 14:29:00 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:08.520 14:29:00 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:08.520 14:29:00 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:27:08.520 14:29:00 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:27:08.520 14:29:00 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:08.520 14:29:00 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=149467 00:27:08.520 14:29:00 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:08.520 14:29:00 -- bdev/blockdev.sh@47 -- # waitforlisten 149467 00:27:08.520 14:29:00 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:08.520 14:29:00 -- common/autotest_common.sh@829 -- # '[' -z 149467 ']' 00:27:08.520 14:29:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.520 14:29:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.520 14:29:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.520 14:29:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.520 14:29:00 -- common/autotest_common.sh@10 -- # set +x 00:27:08.520 [2024-11-18 14:29:00.577099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:08.520 [2024-11-18 14:29:00.577718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149467 ] 00:27:08.779 [2024-11-18 14:29:00.716431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.779 [2024-11-18 14:29:00.777681] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:08.779 [2024-11-18 14:29:00.778169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.717 14:29:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.717 14:29:01 -- common/autotest_common.sh@862 -- # return 0 00:27:09.717 14:29:01 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:09.717 14:29:01 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:27:09.717 14:29:01 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:27:09.717 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.717 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.717 Malloc0 00:27:09.717 Malloc1 00:27:09.717 Malloc2 00:27:09.717 14:29:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.717 14:29:01 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:09.717 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.717 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.717 14:29:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.717 14:29:01 -- bdev/blockdev.sh@738 -- # cat 00:27:09.718 14:29:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:09.718 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.718 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.718 14:29:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.718 14:29:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:09.718 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.718 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.718 14:29:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.718 14:29:01 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:09.718 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.718 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.718 14:29:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.718 14:29:01 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:09.718 14:29:01 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:09.718 14:29:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.718 14:29:01 -- common/autotest_common.sh@10 -- # set +x 00:27:09.718 14:29:01 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:09.718 14:29:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.718 14:29:01 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:09.718 14:29:01 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:09.718 14:29:01 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4b804d29-6316-4c2d-b686-59fde19da996"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4b804d29-6316-4c2d-b686-59fde19da996",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4b804d29-6316-4c2d-b686-59fde19da996",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c0828199-64a7-4263-953b-66983bfc08b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e18ec419-56bc-409d-9c60-8640c0cc7129",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b187a796-0519-4253-a638-3b8ee1f8fb32",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:27:09.718 14:29:01 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:09.718 14:29:01 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:27:09.718 14:29:01 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:09.718 14:29:01 -- bdev/blockdev.sh@752 -- # killprocess 149467 00:27:09.718 14:29:01 -- common/autotest_common.sh@936 -- # '[' -z 149467 ']' 00:27:09.718 14:29:01 -- common/autotest_common.sh@940 -- # kill -0 149467 00:27:09.718 14:29:01 -- common/autotest_common.sh@941 -- # uname 00:27:09.718 14:29:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:09.718 14:29:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149467 00:27:09.718 14:29:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:09.718 14:29:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:09.718 killing process with pid 149467 00:27:09.718 14:29:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149467' 00:27:09.718 14:29:01 -- common/autotest_common.sh@955 -- # kill 149467 00:27:09.718 14:29:01 -- common/autotest_common.sh@960 -- # wait 149467 00:27:10.286 14:29:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:10.286 14:29:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:27:10.286 14:29:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:10.286 14:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:10.286 14:29:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.286 ************************************ 00:27:10.286 START TEST bdev_hello_world 00:27:10.286 ************************************ 00:27:10.286 14:29:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:27:10.286 [2024-11-18 14:29:02.176681] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:27:10.286 [2024-11-18 14:29:02.176851] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149510 ] 00:27:10.286 [2024-11-18 14:29:02.316902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.545 [2024-11-18 14:29:02.379613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.545 [2024-11-18 14:29:02.593133] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:10.545 [2024-11-18 14:29:02.593456] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:27:10.545 [2024-11-18 14:29:02.593623] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:10.545 [2024-11-18 14:29:02.594127] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:10.545 [2024-11-18 14:29:02.594399] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:10.545 [2024-11-18 14:29:02.594527] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:10.545 [2024-11-18 14:29:02.594783] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:10.545 00:27:10.545 [2024-11-18 14:29:02.594998] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:10.804 00:27:10.804 real 0m0.701s 00:27:10.804 user 0m0.367s 00:27:10.804 sys 0m0.217s 00:27:10.804 14:29:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:10.804 14:29:02 -- common/autotest_common.sh@10 -- # set +x 00:27:10.804 ************************************ 00:27:10.804 END TEST bdev_hello_world 00:27:10.804 ************************************ 00:27:11.063 14:29:02 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:11.063 14:29:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:11.063 14:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:11.063 14:29:02 -- common/autotest_common.sh@10 -- # set +x 00:27:11.063 ************************************ 00:27:11.063 START TEST bdev_bounds 00:27:11.063 ************************************ 00:27:11.063 14:29:02 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:27:11.063 14:29:02 -- bdev/blockdev.sh@288 -- # bdevio_pid=149548 00:27:11.063 14:29:02 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:11.063 Process bdevio pid: 149548 00:27:11.063 14:29:02 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 149548' 00:27:11.063 14:29:02 -- bdev/blockdev.sh@291 -- # waitforlisten 149548 00:27:11.063 14:29:02 -- common/autotest_common.sh@829 -- # '[' -z 149548 ']' 00:27:11.063 14:29:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.063 14:29:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.063 14:29:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
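Each target launch in this run is followed by waitforlisten, which blocks until the new process exposes its JSON-RPC UNIX socket before any rpc.py call is attempted. A reduced sketch of that polling loop, using the default socket path and the max_retries=100 value from the trace; the full helper additionally probes the socket with an RPC call (rpc_get_methods), while this version only waits for the socket file to appear:

# Reduced waitforlisten polling loop (default socket path assumed).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < 100; i++)); do             # max_retries=100 in the trace
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        [[ -S "$rpc_addr" ]] && return 0        # socket node exists: target is up
        sleep 0.5
    done
    return 1
}

Checking the pid on every iteration means a crashed target fails the wait immediately instead of burning the whole retry budget.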
00:27:11.063 14:29:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.063 14:29:02 -- common/autotest_common.sh@10 -- # set +x 00:27:11.063 14:29:02 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:11.063 [2024-11-18 14:29:02.950209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:11.063 [2024-11-18 14:29:02.950673] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149548 ] 00:27:11.063 [2024-11-18 14:29:03.107699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:11.322 [2024-11-18 14:29:03.165891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.322 [2024-11-18 14:29:03.165949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.322 [2024-11-18 14:29:03.165953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.890 14:29:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.890 14:29:03 -- common/autotest_common.sh@862 -- # return 0 00:27:11.890 14:29:03 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:11.890 I/O targets: 00:27:11.890 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:27:11.890 00:27:11.890 00:27:11.890 CUnit - A unit testing framework for C - Version 2.1-3 00:27:11.890 http://cunit.sourceforge.net/ 00:27:11.890 00:27:11.890 00:27:11.890 Suite: bdevio tests on: raid5f 00:27:11.890 Test: blockdev write read block ...passed 00:27:11.890 Test: blockdev write zeroes read block ...passed 00:27:11.890 Test: blockdev write zeroes read no split ...passed 00:27:12.149 Test: blockdev write zeroes read split ...passed 00:27:12.149 Test: blockdev write zeroes read split partial ...passed 00:27:12.149 Test: blockdev reset ...passed 00:27:12.149 Test: blockdev write read 8 blocks ...passed 00:27:12.149 Test: blockdev write read size > 128k ...passed 00:27:12.149 Test: blockdev write read invalid size ...passed 00:27:12.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:12.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:12.149 Test: blockdev write read max offset ...passed 00:27:12.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:12.149 Test: blockdev writev readv 8 blocks ...passed 00:27:12.149 Test: blockdev writev readv 30 x 1block ...passed 00:27:12.149 Test: blockdev writev readv block ...passed 00:27:12.149 Test: blockdev writev readv size > 128k ...passed 00:27:12.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:12.149 Test: blockdev comparev and writev ...passed 00:27:12.149 Test: blockdev nvme passthru rw ...passed 00:27:12.149 Test: blockdev nvme passthru vendor specific ...passed 00:27:12.149 Test: blockdev nvme admin passthru ...passed 00:27:12.149 Test: blockdev copy ...passed 00:27:12.149 00:27:12.149 Run Summary: Type Total Ran Passed Failed Inactive 00:27:12.149 suites 1 1 n/a 0 0 00:27:12.149 tests 23 23 23 0 0 00:27:12.149 asserts 130 130 130 0 n/a 00:27:12.149 00:27:12.149 Elapsed time = 0.325 seconds 00:27:12.149 0 00:27:12.149 14:29:04 -- bdev/blockdev.sh@293 -- # killprocess 149548 00:27:12.149 14:29:04 -- common/autotest_common.sh@936 -- # '[' -z 149548 ']' 
00:27:12.149 14:29:04 -- common/autotest_common.sh@940 -- # kill -0 149548 00:27:12.149 14:29:04 -- common/autotest_common.sh@941 -- # uname 00:27:12.149 14:29:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:12.149 14:29:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149548 00:27:12.149 14:29:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:12.149 14:29:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:12.149 14:29:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149548' 00:27:12.149 killing process with pid 149548 00:27:12.149 14:29:04 -- common/autotest_common.sh@955 -- # kill 149548 00:27:12.149 14:29:04 -- common/autotest_common.sh@960 -- # wait 149548 00:27:12.408 14:29:04 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:12.408 00:27:12.408 real 0m1.481s 00:27:12.408 user 0m3.667s 00:27:12.408 sys 0m0.326s 00:27:12.408 14:29:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:12.408 ************************************ 00:27:12.408 14:29:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.408 END TEST bdev_bounds 00:27:12.408 ************************************ 00:27:12.408 14:29:04 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:27:12.408 14:29:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:12.408 14:29:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.408 14:29:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.408 ************************************ 00:27:12.408 START TEST bdev_nbd 00:27:12.408 ************************************ 00:27:12.408 14:29:04 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:27:12.408 14:29:04 -- bdev/blockdev.sh@298 -- # uname -s 00:27:12.408 14:29:04 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:12.408 14:29:04 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:12.408 14:29:04 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:12.408 14:29:04 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:27:12.408 14:29:04 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:12.408 14:29:04 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:27:12.408 14:29:04 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:12.409 14:29:04 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:12.409 14:29:04 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:12.409 14:29:04 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:27:12.409 14:29:04 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:27:12.409 14:29:04 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:12.409 14:29:04 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:27:12.409 14:29:04 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:12.409 14:29:04 -- bdev/blockdev.sh@316 -- # nbd_pid=149605 00:27:12.409 14:29:04 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:12.409 14:29:04 -- bdev/blockdev.sh@318 -- # waitforlisten 149605 /var/tmp/spdk-nbd.sock 00:27:12.409 14:29:04 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:12.409 14:29:04 -- common/autotest_common.sh@829 -- # '[' -z 149605 ']' 00:27:12.409 14:29:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:12.409 14:29:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:12.409 14:29:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:12.409 14:29:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.409 14:29:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.668 [2024-11-18 14:29:04.484461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:12.668 [2024-11-18 14:29:04.484699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.668 [2024-11-18 14:29:04.632904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.668 [2024-11-18 14:29:04.687731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.605 14:29:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.605 14:29:05 -- common/autotest_common.sh@862 -- # return 0 00:27:13.605 14:29:05 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@24 -- # local i 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:13.605 14:29:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:27:13.864 14:29:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:13.864 14:29:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:13.864 14:29:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:13.864 14:29:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:13.864 14:29:05 -- common/autotest_common.sh@867 -- # local i 00:27:13.864 14:29:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:13.864 14:29:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:13.864 14:29:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:13.864 14:29:05 -- common/autotest_common.sh@871 -- # break 00:27:13.864 14:29:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:13.864 14:29:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:13.864 14:29:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:13.864 1+0 records in 
00:27:13.864 1+0 records out 00:27:13.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341846 s, 12.0 MB/s 00:27:13.864 14:29:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.864 14:29:05 -- common/autotest_common.sh@884 -- # size=4096 00:27:13.864 14:29:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.864 14:29:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:13.864 14:29:05 -- common/autotest_common.sh@887 -- # return 0 00:27:13.864 14:29:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:13.864 14:29:05 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:13.864 14:29:05 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:14.123 { 00:27:14.123 "nbd_device": "/dev/nbd0", 00:27:14.123 "bdev_name": "raid5f" 00:27:14.123 } 00:27:14.123 ]' 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:14.123 { 00:27:14.123 "nbd_device": "/dev/nbd0", 00:27:14.123 "bdev_name": "raid5f" 00:27:14.123 } 00:27:14.123 ]' 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@51 -- # local i 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.123 14:29:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@41 -- # break 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.382 14:29:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:14.640 14:29:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:14.640 14:29:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:14.640 14:29:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:14.640 14:29:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:14.640 14:29:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@65 -- # true 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@65 -- # count=0 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@122 -- # count=0 00:27:14.641 14:29:06 -- 
bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@127 -- # return 0 00:27:14.641 14:29:06 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@12 -- # local i 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:14.641 14:29:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:27:14.900 /dev/nbd0 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:14.900 14:29:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:14.900 14:29:06 -- common/autotest_common.sh@867 -- # local i 00:27:14.900 14:29:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:14.900 14:29:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:14.900 14:29:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:14.900 14:29:06 -- common/autotest_common.sh@871 -- # break 00:27:14.900 14:29:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:14.900 14:29:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:14.900 14:29:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:14.900 1+0 records in 00:27:14.900 1+0 records out 00:27:14.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597696 s, 6.9 MB/s 00:27:14.900 14:29:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.900 14:29:06 -- common/autotest_common.sh@884 -- # size=4096 00:27:14.900 14:29:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.900 14:29:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:14.900 14:29:06 -- common/autotest_common.sh@887 -- # return 0 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.900 14:29:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:15.159 { 00:27:15.159 "nbd_device": "/dev/nbd0", 00:27:15.159 "bdev_name": "raid5f" 00:27:15.159 } 00:27:15.159 ]' 
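The nbd test exports the raid5f bdev as a kernel /dev/nbd0 device through the bdev_svc app and its RPC socket, then verifies raw dd I/O against it. A condensed sketch of the sequence being traced, with the socket path, bdev name, and dd parameters all taken from this run:

# NBD export round-trip as driven by nbd_common.sh:
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock --json ./test/bdev/bdev.json &
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # raw I/O now lands on the bdev
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0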
00:27:15.159 14:29:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:15.159 { 00:27:15.159 "nbd_device": "/dev/nbd0", 00:27:15.159 "bdev_name": "raid5f" 00:27:15.159 } 00:27:15.159 ]' 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@65 -- # count=1 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@66 -- # echo 1 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@95 -- # count=1 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:15.159 256+0 records in 00:27:15.159 256+0 records out 00:27:15.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00972447 s, 108 MB/s 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:15.159 14:29:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:15.418 256+0 records in 00:27:15.418 256+0 records out 00:27:15.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304846 s, 34.4 MB/s 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@51 -- # local i 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:15.418 14:29:07 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@41 -- # break 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:15.418 14:29:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.419 14:29:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:15.678 14:29:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:15.678 14:29:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:15.678 14:29:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@65 -- # true 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@65 -- # count=0 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@104 -- # count=0 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@109 -- # return 0 00:27:15.937 14:29:07 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:15.937 14:29:07 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:16.196 malloc_lvol_verify 00:27:16.196 14:29:08 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:16.455 aad96727-c276-438f-9c63-b9bc9f0990d0 00:27:16.455 14:29:08 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:16.455 506bff4b-148f-431d-938c-cd5e694b76b2 00:27:16.455 14:29:08 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:16.713 /dev/nbd0 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:16.713 mke2fs 1.46.5 (30-Dec-2021) 00:27:16.713 00:27:16.713 Filesystem too small for a journal 00:27:16.713 Discarding device blocks: 0/1024 done 00:27:16.713 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:16.713 00:27:16.713 Allocating group tables: 0/1 done 00:27:16.713 Writing inode tables: 0/1 done 00:27:16.713 Writing superblocks and filesystem accounting information: 0/1 done 00:27:16.713 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:16.713 14:29:08 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@51 -- # local i 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:16.713 14:29:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@41 -- # break 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@45 -- # return 0 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:16.973 14:29:08 -- bdev/nbd_common.sh@147 -- # return 0 00:27:16.973 14:29:08 -- bdev/blockdev.sh@324 -- # killprocess 149605 00:27:16.973 14:29:08 -- common/autotest_common.sh@936 -- # '[' -z 149605 ']' 00:27:16.973 14:29:08 -- common/autotest_common.sh@940 -- # kill -0 149605 00:27:16.973 14:29:08 -- common/autotest_common.sh@941 -- # uname 00:27:16.973 14:29:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.973 14:29:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149605 00:27:16.973 14:29:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:16.973 14:29:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:16.973 14:29:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149605' 00:27:16.973 killing process with pid 149605 00:27:16.973 14:29:08 -- common/autotest_common.sh@955 -- # kill 149605 00:27:16.973 14:29:08 -- common/autotest_common.sh@960 -- # wait 149605 00:27:17.234 14:29:09 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:17.234 00:27:17.234 real 0m4.793s 00:27:17.234 user 0m7.415s 00:27:17.234 sys 0m1.063s 00:27:17.234 14:29:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:17.234 14:29:09 -- common/autotest_common.sh@10 -- # set +x 00:27:17.234 ************************************ 00:27:17.234 END TEST bdev_nbd 00:27:17.234 ************************************ 00:27:17.234 14:29:09 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:17.234 14:29:09 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:27:17.234 14:29:09 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:27:17.234 14:29:09 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:17.234 14:29:09 -- common/autotest_common.sh@10 -- # set +x 00:27:17.234 ************************************ 00:27:17.234 START TEST bdev_fio 00:27:17.234 ************************************ 00:27:17.234 14:29:09 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:27:17.234 14:29:09 -- bdev/blockdev.sh@329 -- # local env_context 00:27:17.234 14:29:09 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:27:17.234 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:27:17.234 14:29:09 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:27:17.234 14:29:09 -- bdev/blockdev.sh@337 -- # echo '' 00:27:17.234 14:29:09 -- 
bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:27:17.234 14:29:09 -- bdev/blockdev.sh@337 -- # env_context= 00:27:17.234 14:29:09 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:17.234 14:29:09 -- common/autotest_common.sh@1270 -- # local workload=verify 00:27:17.234 14:29:09 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:27:17.234 14:29:09 -- common/autotest_common.sh@1272 -- # local env_context= 00:27:17.234 14:29:09 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:27:17.234 14:29:09 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:17.234 14:29:09 -- common/autotest_common.sh@1290 -- # cat 00:27:17.234 14:29:09 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1303 -- # cat 00:27:17.234 14:29:09 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:27:17.234 14:29:09 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:27:17.495 14:29:09 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:27:17.495 14:29:09 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:27:17.495 14:29:09 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:27:17.495 14:29:09 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:27:17.495 14:29:09 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:27:17.495 14:29:09 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:27:17.495 14:29:09 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.495 14:29:09 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:27:17.495 14:29:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:17.495 14:29:09 -- common/autotest_common.sh@10 -- # set +x 00:27:17.495 ************************************ 00:27:17.495 START TEST bdev_fio_rw_verify 00:27:17.495 ************************************ 00:27:17.495 14:29:09 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.495 14:29:09 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.495 
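The fio suite runs stock fio against the bdev through SPDK's external ioengine: fio_bdev LD_PRELOADs build/fio/spdk_bdev, and the generated bdev.fio names the bdev itself as the job filename rather than a block device. A minimal sketch under those assumptions; the real bdev.fio produced by fio_config_gen also carries the verify-workload globals, omitted here:

# Stripped-down version of the fio invocation traced above:
cat > bdev.fio <<'EOF'
[job_raid5f]
filename=raid5f
EOF
LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev --iodepth=8 --bs=4k \
    --runtime=10 bdev.fio --spdk_json_conf=./test/bdev/bdev.json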
14:29:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:17.495 14:29:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:17.495 14:29:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:17.495 14:29:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.495 14:29:09 -- common/autotest_common.sh@1330 -- # shift 00:27:17.495 14:29:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:17.495 14:29:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.495 14:29:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:17.495 14:29:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:17.495 14:29:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:17.495 14:29:09 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:27:17.495 14:29:09 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:27:17.495 14:29:09 -- common/autotest_common.sh@1336 -- # break 00:27:17.495 14:29:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:17.495 14:29:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:27:17.495 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:27:17.495 fio-3.35 00:27:17.495 Starting 1 thread 00:27:29.699 00:27:29.699 job_raid5f: (groupid=0, jobs=1): err= 0: pid=149823: Mon Nov 18 14:29:20 2024 00:27:29.699 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec) 00:27:29.699 slat (usec): min=19, max=112, avg=21.42, stdev= 2.66 00:27:29.699 clat (usec): min=13, max=379, avg=139.56, stdev=51.56 00:27:29.700 lat (usec): min=37, max=418, avg=160.98, stdev=52.46 00:27:29.700 clat percentiles (usec): 00:27:29.700 | 50.000th=[ 147], 99.000th=[ 237], 99.900th=[ 343], 99.990th=[ 363], 00:27:29.700 | 99.999th=[ 375] 00:27:29.700 write: IOPS=12.1k, BW=47.3MiB/s (49.6MB/s)(466MiB/9866msec); 0 zone resets 00:27:29.700 slat (usec): min=9, max=474, avg=18.22, stdev= 4.15 00:27:29.700 clat (usec): min=61, max=1548, avg=311.49, stdev=50.48 00:27:29.700 lat (usec): min=79, max=1568, avg=329.71, stdev=52.33 00:27:29.700 clat percentiles (usec): 00:27:29.700 | 50.000th=[ 314], 99.000th=[ 498], 99.900th=[ 717], 99.990th=[ 1123], 00:27:29.700 | 99.999th=[ 1532] 00:27:29.700 bw ( KiB/s): min=41528, max=50760, per=98.66%, avg=47741.05, stdev=2210.92, samples=19 00:27:29.700 iops : min=10382, max=12690, avg=11935.26, stdev=552.73, samples=19 00:27:29.700 lat (usec) : 20=0.01%, 50=0.01%, 100=12.18%, 250=41.21%, 500=46.10% 00:27:29.700 lat (usec) : 750=0.45%, 1000=0.04% 00:27:29.700 lat (msec) : 2=0.01% 00:27:29.700 cpu : usr=99.53%, sys=0.43%, ctx=26, majf=0, minf=11265 00:27:29.700 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:29.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.700 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:29.700 issued rwts: total=115211,119358,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:29.700 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:29.700 00:27:29.700 Run status group 0 (all jobs): 00:27:29.700 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec 00:27:29.700 WRITE: bw=47.3MiB/s (49.6MB/s), 47.3MiB/s-47.3MiB/s (49.6MB/s-49.6MB/s), io=466MiB (489MB), run=9866-9866msec 00:27:29.700 ----------------------------------------------------- 00:27:29.700 Suppressions used: 00:27:29.700 count bytes template 00:27:29.700 1 7 /usr/src/fio/parse.c 00:27:29.700 582 55872 /usr/src/fio/iolog.c 00:27:29.700 1 904 libcrypto.so 00:27:29.700 ----------------------------------------------------- 00:27:29.700 00:27:29.700 00:27:29.700 real 0m11.230s 00:27:29.700 user 0m11.739s 00:27:29.700 sys 0m0.646s 00:27:29.700 14:29:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.700 14:29:20 -- common/autotest_common.sh@10 -- # set +x 00:27:29.700 ************************************ 00:27:29.700 END TEST bdev_fio_rw_verify 00:27:29.700 ************************************ 00:27:29.700 14:29:20 -- bdev/blockdev.sh@348 -- # rm -f 00:27:29.700 14:29:20 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:29.700 14:29:20 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:29.700 14:29:20 -- common/autotest_common.sh@1270 -- # local workload=trim 00:27:29.700 14:29:20 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:27:29.700 14:29:20 -- common/autotest_common.sh@1272 -- # local env_context= 00:27:29.700 14:29:20 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:27:29.700 14:29:20 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:29.700 14:29:20 -- common/autotest_common.sh@1290 -- # cat 00:27:29.700 14:29:20 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:27:29.700 14:29:20 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:27:29.700 14:29:20 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "4b804d29-6316-4c2d-b686-59fde19da996"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4b804d29-6316-4c2d-b686-59fde19da996",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "4b804d29-6316-4c2d-b686-59fde19da996",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' 
' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c0828199-64a7-4263-953b-66983bfc08b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e18ec419-56bc-409d-9c60-8640c0cc7129",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b187a796-0519-4253-a638-3b8ee1f8fb32",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:27:29.700 14:29:20 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:27:29.700 14:29:20 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:27:29.700 /home/vagrant/spdk_repo/spdk 00:27:29.700 14:29:20 -- bdev/blockdev.sh@360 -- # popd 00:27:29.700 14:29:20 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:27:29.700 14:29:20 -- bdev/blockdev.sh@362 -- # return 0 00:27:29.700 00:27:29.700 real 0m11.405s 00:27:29.700 user 0m11.847s 00:27:29.700 sys 0m0.715s 00:27:29.700 14:29:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:29.700 14:29:20 -- common/autotest_common.sh@10 -- # set +x 00:27:29.700 ************************************ 00:27:29.700 END TEST bdev_fio 00:27:29.700 ************************************ 00:27:29.700 14:29:20 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:29.700 14:29:20 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:29.700 14:29:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:29.700 14:29:20 -- common/autotest_common.sh@10 -- # set +x 00:27:29.700 ************************************ 00:27:29.700 START TEST bdev_verify 00:27:29.700 ************************************ 00:27:29.700 14:29:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:29.700 [2024-11-18 14:29:20.780931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:29.700 [2024-11-18 14:29:20.781099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149993 ] 00:27:29.700 [2024-11-18 14:29:20.918949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:29.700 [2024-11-18 14:29:20.981899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.700 [2024-11-18 14:29:20.981900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.700 Running I/O for 5 seconds... 
00:27:34.975 00:27:34.975 Latency(us) 00:27:34.975 [2024-11-18T14:29:27.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.975 [2024-11-18T14:29:27.049Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:34.975 Verification LBA range: start 0x0 length 0x2000 00:27:34.975 raid5f : 5.01 8241.06 32.19 0.00 0.00 24630.60 130.33 20375.74 00:27:34.975 [2024-11-18T14:29:27.049Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:34.975 Verification LBA range: start 0x2000 length 0x2000 00:27:34.975 raid5f : 5.01 8653.41 33.80 0.00 0.00 23451.91 294.17 19541.64 00:27:34.975 [2024-11-18T14:29:27.049Z] =================================================================================================================== 00:27:34.975 [2024-11-18T14:29:27.049Z] Total : 16894.47 65.99 0.00 0.00 24026.93 130.33 20375.74 00:27:34.975 00:27:34.975 real 0m5.719s 00:27:34.975 user 0m10.780s 00:27:34.975 sys 0m0.200s 00:27:34.975 14:29:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:34.975 ************************************ 00:27:34.975 END TEST bdev_verify 00:27:34.975 ************************************ 00:27:34.975 14:29:26 -- common/autotest_common.sh@10 -- # set +x 00:27:34.975 14:29:26 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:34.975 14:29:26 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:34.975 14:29:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:34.975 14:29:26 -- common/autotest_common.sh@10 -- # set +x 00:27:34.975 ************************************ 00:27:34.975 START TEST bdev_verify_big_io 00:27:34.975 ************************************ 00:27:34.975 14:29:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:34.975 [2024-11-18 14:29:26.556267] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:34.975 [2024-11-18 14:29:26.556637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150082 ] 00:27:34.975 [2024-11-18 14:29:26.696664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:34.975 [2024-11-18 14:29:26.763799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.975 [2024-11-18 14:29:26.763844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.975 Running I/O for 5 seconds... 
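Both verify stages use the same bdevperf binary; only the I/O size differs between them (-o 4096 above, -o 65536 for the big-I/O pass whose results follow). Reading the flags off the traced commands: -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds, and -m the reactor core mask; -C is carried over as-is without interpretation.

# bdevperf verify invocation as traced above (4 KiB variant):
./build/examples/bdevperf --json ./test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3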
00:27:40.292 00:27:40.292 Latency(us) 00:27:40.292 [2024-11-18T14:29:32.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.292 [2024-11-18T14:29:32.366Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:40.292 Verification LBA range: start 0x0 length 0x200 00:27:40.292 raid5f : 5.17 617.96 38.62 0.00 0.00 5405481.90 185.25 163005.91 00:27:40.292 [2024-11-18T14:29:32.366Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:40.292 Verification LBA range: start 0x200 length 0x200 00:27:40.292 raid5f : 5.16 647.95 40.50 0.00 0.00 5166588.84 148.01 160146.15 00:27:40.292 [2024-11-18T14:29:32.366Z] =================================================================================================================== 00:27:40.292 [2024-11-18T14:29:32.366Z] Total : 1265.92 79.12 0.00 0.00 5283330.64 148.01 163005.91 00:27:40.561 ************************************ 00:27:40.561 END TEST bdev_verify_big_io 00:27:40.561 ************************************ 00:27:40.561 00:27:40.561 real 0m5.891s 00:27:40.561 user 0m11.093s 00:27:40.561 sys 0m0.208s 00:27:40.561 14:29:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:40.561 14:29:32 -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 14:29:32 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.561 14:29:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:40.561 14:29:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:40.561 14:29:32 -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 ************************************ 00:27:40.561 START TEST bdev_write_zeroes 00:27:40.561 ************************************ 00:27:40.561 14:29:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.561 [2024-11-18 14:29:32.505730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:40.561 [2024-11-18 14:29:32.506195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150178 ] 00:27:40.821 [2024-11-18 14:29:32.655898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.821 [2024-11-18 14:29:32.722693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.080 Running I/O for 1 seconds... 
00:27:42.018 00:27:42.018 Latency(us) 00:27:42.018 [2024-11-18T14:29:34.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.018 [2024-11-18T14:29:34.092Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:42.018 raid5f : 1.00 28007.97 109.41 0.00 0.00 4558.03 1385.19 6196.13 00:27:42.018 [2024-11-18T14:29:34.092Z] =================================================================================================================== 00:27:42.018 [2024-11-18T14:29:34.092Z] Total : 28007.97 109.41 0.00 0.00 4558.03 1385.19 6196.13 00:27:42.277 ************************************ 00:27:42.278 END TEST bdev_write_zeroes 00:27:42.278 ************************************ 00:27:42.278 00:27:42.278 real 0m1.731s 00:27:42.278 user 0m1.412s 00:27:42.278 sys 0m0.200s 00:27:42.278 14:29:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:42.278 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 14:29:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:42.278 14:29:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:42.278 14:29:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:42.278 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:27:42.278 ************************************ 00:27:42.278 START TEST bdev_json_nonenclosed 00:27:42.278 ************************************ 00:27:42.278 14:29:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:42.278 [2024-11-18 14:29:34.302753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:42.278 [2024-11-18 14:29:34.303341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150223 ] 00:27:42.537 [2024-11-18 14:29:34.451802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.537 [2024-11-18 14:29:34.527967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.537 [2024-11-18 14:29:34.528382] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:27:42.537 [2024-11-18 14:29:34.528551] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:42.796 00:27:42.796 real 0m0.395s 00:27:42.796 user 0m0.188s 00:27:42.796 sys 0m0.106s 00:27:42.796 14:29:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:42.796 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:27:42.796 ************************************ 00:27:42.796 END TEST bdev_json_nonenclosed 00:27:42.796 ************************************ 00:27:42.796 14:29:34 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:42.796 14:29:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:42.796 14:29:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:42.796 14:29:34 -- common/autotest_common.sh@10 -- # set +x 00:27:42.796 ************************************ 00:27:42.796 START TEST bdev_json_nonarray 00:27:42.796 ************************************ 00:27:42.796 14:29:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:42.796 [2024-11-18 14:29:34.749457] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:42.796 [2024-11-18 14:29:34.749865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150252 ] 00:27:43.056 [2024-11-18 14:29:34.896200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.056 [2024-11-18 14:29:34.975343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.056 [2024-11-18 14:29:34.975885] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
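The two JSON negative tests feed bdevperf configs that are malformed in exactly the ways the loader reports: nonenclosed.json is not wrapped in a top-level object, and nonarray.json carries a 'subsystems' value that is not an array. For contrast, the shape json_config.c accepts looks like this, with the entry contents purely illustrative:

# Minimal well-formed SPDK JSON config: top level is an object, "subsystems"
# is an array of subsystem objects.
cat > good.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF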
00:27:43.056 [2024-11-18 14:29:34.976063] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:43.056 00:27:43.056 real 0m0.370s 00:27:43.056 user 0m0.153s 00:27:43.056 sys 0m0.117s 00:27:43.056 ************************************ 00:27:43.056 END TEST bdev_json_nonarray 00:27:43.056 ************************************ 00:27:43.056 14:29:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:43.056 14:29:35 -- common/autotest_common.sh@10 -- # set +x 00:27:43.056 14:29:35 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:27:43.056 14:29:35 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:27:43.056 14:29:35 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:27:43.056 14:29:35 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:27:43.056 14:29:35 -- bdev/blockdev.sh@809 -- # cleanup 00:27:43.056 14:29:35 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:43.056 14:29:35 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:43.056 14:29:35 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:27:43.056 14:29:35 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:27:43.056 14:29:35 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:27:43.056 14:29:35 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:27:43.056 ************************************ 00:27:43.056 END TEST blockdev_raid5f 00:27:43.056 ************************************ 00:27:43.056 00:27:43.056 real 0m34.782s 00:27:43.056 user 0m48.958s 00:27:43.056 sys 0m3.870s 00:27:43.056 14:29:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:43.056 14:29:35 -- common/autotest_common.sh@10 -- # set +x 00:27:43.315 14:29:35 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:43.315 14:29:35 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:43.315 14:29:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.315 14:29:35 -- common/autotest_common.sh@10 -- # set +x 00:27:43.315 14:29:35 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:43.315 14:29:35 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:43.315 14:29:35 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:43.315 14:29:35 -- common/autotest_common.sh@10 -- # set +x 00:27:44.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:44.949 Waiting for block devices as requested 00:27:44.949 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:45.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:27:45.514 Cleaning 00:27:45.514 Removing: /var/run/dpdk/spdk0/config 00:27:45.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:45.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:45.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:45.514 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:45.514 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:45.514 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:45.514 Removing: /dev/shm/spdk_tgt_trace.pid114554 00:27:45.514 Removing: /var/run/dpdk/spdk0 00:27:45.514 Removing: /var/run/dpdk/spdk_pid114354 00:27:45.514 Removing: /var/run/dpdk/spdk_pid114554 00:27:45.514 Removing: /var/run/dpdk/spdk_pid114840 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115093 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115273 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115361 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115454 
00:27:45.514 Removing: /var/run/dpdk/spdk_pid115562 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115658 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115710 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115751 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115832 00:27:45.514 Removing: /var/run/dpdk/spdk_pid115941 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116455 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116516 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116576 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116599 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116680 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116701 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116775 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116796 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116844 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116867 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116925 00:27:45.514 Removing: /var/run/dpdk/spdk_pid116942 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117094 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117139 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117183 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117270 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117333 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117372 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117454 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117477 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117522 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117545 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117592 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117615 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117660 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117690 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117728 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117758 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117798 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117828 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117874 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117896 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117936 00:27:45.514 Removing: /var/run/dpdk/spdk_pid117964 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118004 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118034 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118074 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118107 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118140 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118175 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118208 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118245 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118278 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118313 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118353 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118380 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118421 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118450 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118491 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118518 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118559 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118592 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118633 00:27:45.514 Removing: /var/run/dpdk/spdk_pid118666 00:27:45.772 Removing: /var/run/dpdk/spdk_pid118709 00:27:45.772 Removing: /var/run/dpdk/spdk_pid118739 00:27:45.772 Removing: /var/run/dpdk/spdk_pid118777 00:27:45.772 Removing: /var/run/dpdk/spdk_pid118807 00:27:45.772 Removing: /var/run/dpdk/spdk_pid118854 00:27:45.772 Removing: /var/run/dpdk/spdk_pid118942 00:27:45.772 Removing: /var/run/dpdk/spdk_pid119058 00:27:45.772 Removing: /var/run/dpdk/spdk_pid119225 00:27:45.772 
Removing: /var/run/dpdk/spdk_pid119286 00:27:45.772 Removing: /var/run/dpdk/spdk_pid119329 00:27:45.772 Removing: /var/run/dpdk/spdk_pid120501 00:27:45.772 Removing: /var/run/dpdk/spdk_pid120699 00:27:45.772 Removing: /var/run/dpdk/spdk_pid120893 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121000 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121106 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121158 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121196 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121218 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121688 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121772 00:27:45.772 Removing: /var/run/dpdk/spdk_pid121873 00:27:45.773 Removing: /var/run/dpdk/spdk_pid121926 00:27:45.773 Removing: /var/run/dpdk/spdk_pid123073 00:27:45.773 Removing: /var/run/dpdk/spdk_pid123937 00:27:45.773 Removing: /var/run/dpdk/spdk_pid124809 00:27:45.773 Removing: /var/run/dpdk/spdk_pid125891 00:27:45.773 Removing: /var/run/dpdk/spdk_pid126927 00:27:45.773 Removing: /var/run/dpdk/spdk_pid127965 00:27:45.773 Removing: /var/run/dpdk/spdk_pid129405 00:27:45.773 Removing: /var/run/dpdk/spdk_pid130584 00:27:45.773 Removing: /var/run/dpdk/spdk_pid131750 00:27:45.773 Removing: /var/run/dpdk/spdk_pid132402 00:27:45.773 Removing: /var/run/dpdk/spdk_pid132929 00:27:45.773 Removing: /var/run/dpdk/spdk_pid133537 00:27:45.773 Removing: /var/run/dpdk/spdk_pid134012 00:27:45.773 Removing: /var/run/dpdk/spdk_pid134546 00:27:45.773 Removing: /var/run/dpdk/spdk_pid135077 00:27:45.773 Removing: /var/run/dpdk/spdk_pid135717 00:27:45.773 Removing: /var/run/dpdk/spdk_pid136214 00:27:45.773 Removing: /var/run/dpdk/spdk_pid137550 00:27:45.773 Removing: /var/run/dpdk/spdk_pid138136 00:27:45.773 Removing: /var/run/dpdk/spdk_pid138665 00:27:45.773 Removing: /var/run/dpdk/spdk_pid140136 00:27:45.773 Removing: /var/run/dpdk/spdk_pid140785 00:27:45.773 Removing: /var/run/dpdk/spdk_pid141387 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142139 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142182 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142221 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142274 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142406 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142553 00:27:45.773 Removing: /var/run/dpdk/spdk_pid142782 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143078 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143093 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143134 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143158 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143178 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143194 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143207 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143227 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143247 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143263 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143284 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143304 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143319 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143335 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143355 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143370 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143391 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143411 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143426 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143440 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143482 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143499 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143527 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143607 00:27:45.773 Removing: 
/var/run/dpdk/spdk_pid143642 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143662 00:27:45.773 Removing: /var/run/dpdk/spdk_pid143691 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143711 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143715 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143767 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143786 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143816 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143831 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143847 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143853 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143869 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143875 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143892 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143904 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143935 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143969 00:27:46.031 Removing: /var/run/dpdk/spdk_pid143992 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144018 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144039 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144053 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144105 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144118 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144149 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144170 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144186 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144192 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144209 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144221 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144234 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144250 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144336 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144398 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144518 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144541 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144588 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144646 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144672 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144689 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144711 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144748 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144770 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144859 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144897 00:27:46.031 Removing: /var/run/dpdk/spdk_pid144943 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145210 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145332 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145368 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145461 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145528 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145566 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145805 00:27:46.031 Removing: /var/run/dpdk/spdk_pid145921 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146017 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146055 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146086 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146168 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146586 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146617 00:27:46.031 Removing: /var/run/dpdk/spdk_pid146911 00:27:46.031 Removing: /var/run/dpdk/spdk_pid147031 00:27:46.031 Removing: /var/run/dpdk/spdk_pid147128 00:27:46.031 Removing: /var/run/dpdk/spdk_pid147172 00:27:46.031 Removing: /var/run/dpdk/spdk_pid147194 00:27:46.031 Removing: /var/run/dpdk/spdk_pid147223 00:27:46.031 Removing: /var/run/dpdk/spdk_pid148566 00:27:46.031 Removing: /var/run/dpdk/spdk_pid148688 00:27:46.031 Removing: /var/run/dpdk/spdk_pid148698 00:27:46.031 Removing: 
/var/run/dpdk/spdk_pid148716 00:27:46.031 Removing: /var/run/dpdk/spdk_pid149230 00:27:46.031 Removing: /var/run/dpdk/spdk_pid149324 00:27:46.032 Removing: /var/run/dpdk/spdk_pid149467 00:27:46.032 Removing: /var/run/dpdk/spdk_pid149510 00:27:46.032 Removing: /var/run/dpdk/spdk_pid149548 00:27:46.032 Removing: /var/run/dpdk/spdk_pid149810 00:27:46.032 Removing: /var/run/dpdk/spdk_pid149993 00:27:46.032 Removing: /var/run/dpdk/spdk_pid150082 00:27:46.032 Removing: /var/run/dpdk/spdk_pid150178 00:27:46.032 Removing: /var/run/dpdk/spdk_pid150223 00:27:46.032 Removing: /var/run/dpdk/spdk_pid150252 00:27:46.032 Clean 00:27:46.290 killing process with pid 103987 00:27:46.290 killing process with pid 103991 00:27:46.290 14:29:38 -- common/autotest_common.sh@1446 -- # return 0 00:27:46.290 14:29:38 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:46.290 14:29:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.290 14:29:38 -- common/autotest_common.sh@10 -- # set +x 00:27:46.290 14:29:38 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:46.290 14:29:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.290 14:29:38 -- common/autotest_common.sh@10 -- # set +x 00:27:46.290 14:29:38 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:46.290 14:29:38 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:46.290 14:29:38 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:46.290 14:29:38 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:46.290 14:29:38 -- spdk/autotest.sh@383 -- # hostname 00:27:46.290 14:29:38 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:46.548 geninfo: WARNING: invalid characters removed from testname! 
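Unrolled from the xtrace entries above and below, the coverage step is a three-stage lcov pipeline: capture the counters accumulated while the tests ran, merge them with the pre-test baseline, then strip out-of-tree and helper code from the total. A minimal sketch of the same sequence follows; $OUT and $rootdir are stand-ins for the logged /home/vagrant/spdk_repo/spdk/../output and /home/vagrant/spdk_repo/spdk, and $LCOV_OPTS for the --rc branch/function-coverage switches spelled inline in the trace:

    # capture counters gathered during the test run, tagged with the hostname
    lcov $LCOV_OPTS -q -c --no-external -d "$rootdir" -t "$(hostname)" -o "$OUT/cov_test.info"
    # merge with the baseline captured before the tests started
    lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # carve bundled DPDK and system headers out of the merged report
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"
    # further -r passes drop */examples/vmd/*, */app/spdk_lspci/*, */app/spdk_top/* the same way

This is a restatement of the commands traced in the log, not a separate script; the '/usr/*' pass in the log additionally carries --ignore-errors unused,unused for newer lcov releases.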
00:28:25.264 14:30:16 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:29.456 14:30:20 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:31.991 14:30:23 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:34.527 14:30:26 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:37.061 14:30:29 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:40.364 14:30:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:42.899 14:30:34 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:42.900 14:30:34 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:28:42.900 14:30:34 -- common/autotest_common.sh@1690 -- $ lcov --version 00:28:42.900 14:30:34 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:28:42.900 14:30:34 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:28:42.900 14:30:34 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:28:42.900 14:30:34 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:28:42.900 14:30:34 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:28:42.900 14:30:34 -- scripts/common.sh@335 -- $ IFS=.-: 00:28:42.900 14:30:34 -- scripts/common.sh@335 -- $ read -ra ver1 00:28:42.900 14:30:34 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:42.900 14:30:34 -- scripts/common.sh@336 -- $ read -ra ver2 00:28:42.900 14:30:34 -- scripts/common.sh@337 -- $ local 'op=<' 00:28:42.900 14:30:34 -- scripts/common.sh@339 -- $ ver1_l=2 00:28:42.900 14:30:34 -- scripts/common.sh@340 -- $ ver2_l=1 00:28:42.900 14:30:34 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:28:42.900 14:30:34 -- scripts/common.sh@343 -- $ case "$op" in 00:28:42.900 14:30:34 -- scripts/common.sh@344 -- $ : 1 00:28:42.900 14:30:34 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:28:42.900 14:30:34 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.900 14:30:34 -- scripts/common.sh@364 -- $ decimal 1 00:28:42.900 14:30:34 -- scripts/common.sh@352 -- $ local d=1 00:28:42.900 14:30:34 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:42.900 14:30:34 -- scripts/common.sh@354 -- $ echo 1 00:28:42.900 14:30:34 -- scripts/common.sh@364 -- $ ver1[v]=1 00:28:42.900 14:30:34 -- scripts/common.sh@365 -- $ decimal 2 00:28:42.900 14:30:34 -- scripts/common.sh@352 -- $ local d=2 00:28:42.900 14:30:34 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:42.900 14:30:34 -- scripts/common.sh@354 -- $ echo 2 00:28:42.900 14:30:34 -- scripts/common.sh@365 -- $ ver2[v]=2 00:28:42.900 14:30:34 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:28:42.900 14:30:34 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:28:42.900 14:30:34 -- scripts/common.sh@367 -- $ return 0 00:28:42.900 14:30:34 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.900 14:30:34 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:28:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.900 --rc genhtml_branch_coverage=1 00:28:42.900 --rc genhtml_function_coverage=1 00:28:42.900 --rc genhtml_legend=1 00:28:42.900 --rc geninfo_all_blocks=1 00:28:42.900 --rc geninfo_unexecuted_blocks=1 00:28:42.900 00:28:42.900 ' 00:28:42.900 14:30:34 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:28:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.900 --rc genhtml_branch_coverage=1 00:28:42.900 --rc genhtml_function_coverage=1 00:28:42.900 --rc genhtml_legend=1 00:28:42.900 --rc geninfo_all_blocks=1 00:28:42.900 --rc geninfo_unexecuted_blocks=1 00:28:42.900 00:28:42.900 ' 00:28:42.900 14:30:34 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:28:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.900 --rc genhtml_branch_coverage=1 00:28:42.900 --rc genhtml_function_coverage=1 00:28:42.900 --rc genhtml_legend=1 00:28:42.900 --rc geninfo_all_blocks=1 00:28:42.900 --rc geninfo_unexecuted_blocks=1 00:28:42.900 00:28:42.900 ' 00:28:42.900 14:30:34 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:28:42.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.900 --rc genhtml_branch_coverage=1 00:28:42.900 --rc genhtml_function_coverage=1 00:28:42.900 --rc genhtml_legend=1 00:28:42.900 --rc geninfo_all_blocks=1 00:28:42.900 --rc geninfo_unexecuted_blocks=1 00:28:42.900 00:28:42.900 ' 00:28:42.900 14:30:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:42.900 14:30:34 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:42.900 14:30:34 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.900 14:30:34 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.900 14:30:34 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:42.900 14:30:34 -- 
paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:42.900 14:30:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:42.900 14:30:34 -- paths/export.sh@5 -- $ export PATH 00:28:42.900 14:30:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:42.900 14:30:34 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:42.900 14:30:34 -- common/autobuild_common.sh@440 -- $ date +%s 00:28:42.900 14:30:34 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731940234.XXXXXX 00:28:42.900 14:30:34 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731940234.bYsqBp 00:28:42.900 14:30:34 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:28:42.900 14:30:34 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:28:42.900 14:30:34 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:28:42.900 14:30:34 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:28:42.900 14:30:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:42.900 14:30:34 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:42.900 14:30:34 -- common/autobuild_common.sh@456 -- $ get_config_params 00:28:42.900 14:30:34 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:42.900 14:30:34 -- common/autotest_common.sh@10 -- $ set +x 00:28:42.900 14:30:34 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:28:42.900 14:30:34 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:42.900 14:30:34 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:42.900 14:30:34 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:42.900 14:30:34 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:42.900 14:30:34 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:28:42.900 14:30:34 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:28:42.900 14:30:34 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:28:42.900 14:30:34 -- common/autotest_common.sh@10 -- $ set +x 00:28:42.900 14:30:34 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:28:42.900 14:30:34 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:28:42.900 14:30:34 -- 
spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:28:42.900 14:30:34 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:28:42.900 14:30:34 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:42.900 14:30:34 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:42.900 14:30:34 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:28:42.900 14:30:34 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:28:42.900 14:30:34 -- spdk/autopackage.sh@40 -- $ get_config_params 00:28:42.900 14:30:34 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:28:42.900 14:30:34 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:28:42.901 14:30:34 -- common/autotest_common.sh@10 -- $ set +x 00:28:42.901 14:30:34 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:28:42.901 14:30:34 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:28:42.901 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:28:42.901 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:28:42.901 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:28:42.901 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:43.160 Using 'verbs' RDMA provider 00:28:55.997 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:29:08.205 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:29:08.205 Creating mk/config.mk...done. 00:29:08.205 Creating mk/cc.flags.mk...done. 00:29:08.205 Type 'make' to build. 00:29:08.205 14:30:58 -- spdk/autopackage.sh@43 -- $ make -j10 00:29:08.205 make[1]: Nothing to be done for 'all'. 
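Before reconfiguring, the harness gated the LCOV option spelling on the installed lcov version via scripts/common.sh (the cmp_versions/lt xtrace a little earlier in this log). Condensed into a sketch — the real script sanitizes each field through decimal() and takes a generic comparison operator, both omitted here — the "<" path amounts to:

    # returns 0 (true) when dotted version $1 < $2; missing fields compare as 0
    lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        done
        return 1                                              # equal: not less
    }

In the trace, lt 1.15 2 succeeds, which is presumably why the exported LCOV_OPTS keeps the pre-2.0 lcov_branch_coverage/lcov_function_coverage option names.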
00:29:08.205 CC lib/log/log.o 00:29:08.205 CC lib/log/log_flags.o 00:29:08.205 CC lib/log/log_deprecated.o 00:29:08.205 CC lib/ut/ut.o 00:29:08.205 CC lib/ut_mock/mock.o 00:29:08.205 LIB libspdk_ut_mock.a 00:29:08.205 LIB libspdk_log.a 00:29:08.205 LIB libspdk_ut.a 00:29:08.205 CC lib/dma/dma.o 00:29:08.205 CC lib/util/base64.o 00:29:08.205 CC lib/util/bit_array.o 00:29:08.205 CC lib/util/cpuset.o 00:29:08.205 CXX lib/trace_parser/trace.o 00:29:08.205 CC lib/util/crc16.o 00:29:08.205 CC lib/ioat/ioat.o 00:29:08.205 CC lib/util/crc32.o 00:29:08.205 CC lib/util/crc32c.o 00:29:08.205 CC lib/vfio_user/host/vfio_user_pci.o 00:29:08.205 CC lib/util/crc32_ieee.o 00:29:08.205 CC lib/util/crc64.o 00:29:08.205 CC lib/util/dif.o 00:29:08.205 CC lib/util/fd.o 00:29:08.205 LIB libspdk_dma.a 00:29:08.205 CC lib/vfio_user/host/vfio_user.o 00:29:08.205 CC lib/util/file.o 00:29:08.205 CC lib/util/hexlify.o 00:29:08.205 LIB libspdk_ioat.a 00:29:08.205 CC lib/util/iov.o 00:29:08.205 CC lib/util/math.o 00:29:08.205 CC lib/util/pipe.o 00:29:08.205 CC lib/util/strerror_tls.o 00:29:08.205 CC lib/util/string.o 00:29:08.205 CC lib/util/uuid.o 00:29:08.205 CC lib/util/fd_group.o 00:29:08.205 LIB libspdk_vfio_user.a 00:29:08.205 CC lib/util/xor.o 00:29:08.205 CC lib/util/zipf.o 00:29:08.205 LIB libspdk_util.a 00:29:08.205 LIB libspdk_trace_parser.a 00:29:08.205 CC lib/idxd/idxd.o 00:29:08.205 CC lib/idxd/idxd_user.o 00:29:08.205 CC lib/json/json_parse.o 00:29:08.205 CC lib/json/json_util.o 00:29:08.205 CC lib/vmd/vmd.o 00:29:08.205 CC lib/conf/conf.o 00:29:08.205 CC lib/json/json_write.o 00:29:08.205 CC lib/vmd/led.o 00:29:08.205 CC lib/rdma/common.o 00:29:08.205 CC lib/env_dpdk/env.o 00:29:08.205 CC lib/rdma/rdma_verbs.o 00:29:08.205 CC lib/env_dpdk/memory.o 00:29:08.205 CC lib/env_dpdk/pci.o 00:29:08.205 LIB libspdk_conf.a 00:29:08.205 CC lib/env_dpdk/init.o 00:29:08.205 LIB libspdk_json.a 00:29:08.205 CC lib/env_dpdk/threads.o 00:29:08.205 CC lib/env_dpdk/pci_ioat.o 00:29:08.205 LIB libspdk_rdma.a 00:29:08.205 LIB libspdk_idxd.a 00:29:08.205 CC lib/jsonrpc/jsonrpc_server.o 00:29:08.205 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:29:08.205 CC lib/jsonrpc/jsonrpc_client.o 00:29:08.205 CC lib/env_dpdk/pci_virtio.o 00:29:08.205 LIB libspdk_vmd.a 00:29:08.464 CC lib/env_dpdk/pci_vmd.o 00:29:08.464 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:29:08.464 CC lib/env_dpdk/pci_idxd.o 00:29:08.464 CC lib/env_dpdk/pci_event.o 00:29:08.464 CC lib/env_dpdk/sigbus_handler.o 00:29:08.464 CC lib/env_dpdk/pci_dpdk.o 00:29:08.464 CC lib/env_dpdk/pci_dpdk_2207.o 00:29:08.464 CC lib/env_dpdk/pci_dpdk_2211.o 00:29:08.464 LIB libspdk_jsonrpc.a 00:29:08.464 CC lib/rpc/rpc.o 00:29:08.723 LIB libspdk_rpc.a 00:29:08.723 CC lib/trace/trace_flags.o 00:29:08.723 CC lib/trace/trace.o 00:29:08.723 CC lib/trace/trace_rpc.o 00:29:08.723 CC lib/sock/sock.o 00:29:08.723 CC lib/sock/sock_rpc.o 00:29:08.723 CC lib/notify/notify.o 00:29:08.723 CC lib/notify/notify_rpc.o 00:29:08.982 LIB libspdk_env_dpdk.a 00:29:08.982 LIB libspdk_notify.a 00:29:08.982 LIB libspdk_trace.a 00:29:08.982 LIB libspdk_sock.a 00:29:08.982 CC lib/thread/thread.o 00:29:08.982 CC lib/thread/iobuf.o 00:29:09.241 CC lib/nvme/nvme_ctrlr_cmd.o 00:29:09.241 CC lib/nvme/nvme_ctrlr.o 00:29:09.241 CC lib/nvme/nvme_fabric.o 00:29:09.241 CC lib/nvme/nvme_ns_cmd.o 00:29:09.241 CC lib/nvme/nvme_ns.o 00:29:09.241 CC lib/nvme/nvme_pcie_common.o 00:29:09.241 CC lib/nvme/nvme_pcie.o 00:29:09.241 CC lib/nvme/nvme_qpair.o 00:29:09.241 CC lib/nvme/nvme.o 00:29:09.809 LIB libspdk_thread.a 00:29:09.809 CC 
lib/nvme/nvme_quirks.o 00:29:09.809 CC lib/accel/accel.o 00:29:09.809 CC lib/accel/accel_rpc.o 00:29:09.809 CC lib/blob/blobstore.o 00:29:09.809 CC lib/init/json_config.o 00:29:09.809 CC lib/accel/accel_sw.o 00:29:09.809 CC lib/virtio/virtio.o 00:29:09.809 CC lib/virtio/virtio_vhost_user.o 00:29:09.809 CC lib/nvme/nvme_transport.o 00:29:09.809 CC lib/nvme/nvme_discovery.o 00:29:09.809 CC lib/init/subsystem.o 00:29:09.809 CC lib/init/subsystem_rpc.o 00:29:10.068 CC lib/init/rpc.o 00:29:10.068 CC lib/virtio/virtio_vfio_user.o 00:29:10.068 CC lib/virtio/virtio_pci.o 00:29:10.068 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:29:10.068 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:29:10.068 CC lib/nvme/nvme_tcp.o 00:29:10.068 LIB libspdk_init.a 00:29:10.068 CC lib/nvme/nvme_opal.o 00:29:10.068 LIB libspdk_accel.a 00:29:10.068 CC lib/nvme/nvme_io_msg.o 00:29:10.068 CC lib/nvme/nvme_poll_group.o 00:29:10.068 LIB libspdk_virtio.a 00:29:10.327 CC lib/event/app.o 00:29:10.327 CC lib/event/reactor.o 00:29:10.327 CC lib/event/log_rpc.o 00:29:10.327 CC lib/event/app_rpc.o 00:29:10.327 CC lib/event/scheduler_static.o 00:29:10.586 CC lib/nvme/nvme_zns.o 00:29:10.586 CC lib/nvme/nvme_cuse.o 00:29:10.586 CC lib/blob/request.o 00:29:10.586 CC lib/blob/zeroes.o 00:29:10.586 CC lib/nvme/nvme_vfio_user.o 00:29:10.586 LIB libspdk_event.a 00:29:10.586 CC lib/blob/blob_bs_dev.o 00:29:10.586 CC lib/nvme/nvme_rdma.o 00:29:10.586 CC lib/bdev/bdev.o 00:29:10.586 CC lib/bdev/bdev_rpc.o 00:29:10.586 CC lib/bdev/bdev_zone.o 00:29:10.586 CC lib/bdev/part.o 00:29:10.845 CC lib/bdev/scsi_nvme.o 00:29:10.845 LIB libspdk_blob.a 00:29:11.104 CC lib/lvol/lvol.o 00:29:11.104 CC lib/blobfs/blobfs.o 00:29:11.104 CC lib/blobfs/tree.o 00:29:11.363 LIB libspdk_nvme.a 00:29:11.363 LIB libspdk_blobfs.a 00:29:11.363 LIB libspdk_lvol.a 00:29:11.622 LIB libspdk_bdev.a 00:29:11.881 CC lib/nvmf/ctrlr.o 00:29:11.881 CC lib/nvmf/ctrlr_bdev.o 00:29:11.881 CC lib/nvmf/ctrlr_discovery.o 00:29:11.881 CC lib/nbd/nbd.o 00:29:11.881 CC lib/nvmf/subsystem.o 00:29:11.881 CC lib/nvmf/nvmf.o 00:29:11.881 CC lib/nbd/nbd_rpc.o 00:29:11.881 CC lib/nvmf/nvmf_rpc.o 00:29:11.881 CC lib/scsi/dev.o 00:29:11.881 CC lib/ftl/ftl_core.o 00:29:11.881 CC lib/ftl/ftl_init.o 00:29:11.881 CC lib/scsi/lun.o 00:29:11.881 CC lib/scsi/port.o 00:29:12.140 CC lib/scsi/scsi.o 00:29:12.140 LIB libspdk_nbd.a 00:29:12.140 CC lib/scsi/scsi_bdev.o 00:29:12.140 CC lib/scsi/scsi_pr.o 00:29:12.140 CC lib/ftl/ftl_layout.o 00:29:12.140 CC lib/ftl/ftl_debug.o 00:29:12.140 CC lib/ftl/ftl_io.o 00:29:12.140 CC lib/ftl/ftl_sb.o 00:29:12.140 CC lib/ftl/ftl_l2p.o 00:29:12.140 CC lib/ftl/ftl_l2p_flat.o 00:29:12.140 CC lib/scsi/scsi_rpc.o 00:29:12.140 CC lib/scsi/task.o 00:29:12.140 CC lib/ftl/ftl_nv_cache.o 00:29:12.400 CC lib/ftl/ftl_band.o 00:29:12.400 CC lib/ftl/ftl_band_ops.o 00:29:12.400 CC lib/nvmf/transport.o 00:29:12.400 CC lib/ftl/ftl_writer.o 00:29:12.400 CC lib/nvmf/tcp.o 00:29:12.400 CC lib/ftl/ftl_rq.o 00:29:12.400 CC lib/ftl/ftl_reloc.o 00:29:12.400 CC lib/ftl/ftl_l2p_cache.o 00:29:12.400 LIB libspdk_scsi.a 00:29:12.400 CC lib/ftl/ftl_p2l.o 00:29:12.400 CC lib/ftl/mngt/ftl_mngt.o 00:29:12.400 CC lib/nvmf/rdma.o 00:29:12.400 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:29:12.400 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:29:12.400 CC lib/ftl/mngt/ftl_mngt_startup.o 00:29:12.659 CC lib/ftl/mngt/ftl_mngt_md.o 00:29:12.659 CC lib/ftl/mngt/ftl_mngt_misc.o 00:29:12.659 CC lib/iscsi/conn.o 00:29:12.659 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:29:12.659 CC lib/iscsi/init_grp.o 00:29:12.659 CC lib/iscsi/iscsi.o 00:29:12.659 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:29:12.659 CC lib/vhost/vhost.o 00:29:12.659 CC lib/ftl/mngt/ftl_mngt_band.o 00:29:12.918 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:29:12.918 CC lib/vhost/vhost_rpc.o 00:29:12.918 CC lib/vhost/vhost_scsi.o 00:29:12.918 CC lib/iscsi/md5.o 00:29:12.918 CC lib/iscsi/param.o 00:29:12.918 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:29:12.918 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:29:12.918 CC lib/iscsi/portal_grp.o 00:29:12.918 CC lib/iscsi/tgt_node.o 00:29:13.333 CC lib/iscsi/iscsi_subsystem.o 00:29:13.333 CC lib/iscsi/iscsi_rpc.o 00:29:13.333 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:29:13.333 LIB libspdk_nvmf.a 00:29:13.333 CC lib/iscsi/task.o 00:29:13.333 CC lib/vhost/vhost_blk.o 00:29:13.333 CC lib/ftl/utils/ftl_conf.o 00:29:13.333 CC lib/ftl/utils/ftl_md.o 00:29:13.333 CC lib/ftl/utils/ftl_mempool.o 00:29:13.333 CC lib/ftl/utils/ftl_bitmap.o 00:29:13.333 CC lib/ftl/utils/ftl_property.o 00:29:13.333 CC lib/vhost/rte_vhost_user.o 00:29:13.333 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:29:13.333 LIB libspdk_iscsi.a 00:29:13.333 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:29:13.333 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:29:13.333 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:29:13.333 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:29:13.333 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:29:13.593 CC lib/ftl/upgrade/ftl_sb_v3.o 00:29:13.593 CC lib/ftl/upgrade/ftl_sb_v5.o 00:29:13.593 CC lib/ftl/nvc/ftl_nvc_dev.o 00:29:13.593 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:29:13.593 CC lib/ftl/base/ftl_base_dev.o 00:29:13.593 CC lib/ftl/base/ftl_base_bdev.o 00:29:13.593 LIB libspdk_ftl.a 00:29:13.852 LIB libspdk_vhost.a 00:29:14.111 CC module/env_dpdk/env_dpdk_rpc.o 00:29:14.111 CC module/accel/ioat/accel_ioat.o 00:29:14.111 CC module/scheduler/gscheduler/gscheduler.o 00:29:14.111 CC module/sock/posix/posix.o 00:29:14.111 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:29:14.111 CC module/accel/dsa/accel_dsa.o 00:29:14.111 CC module/blob/bdev/blob_bdev.o 00:29:14.111 CC module/accel/iaa/accel_iaa.o 00:29:14.111 CC module/accel/error/accel_error.o 00:29:14.111 CC module/scheduler/dynamic/scheduler_dynamic.o 00:29:14.111 LIB libspdk_env_dpdk_rpc.a 00:29:14.111 CC module/accel/error/accel_error_rpc.o 00:29:14.111 LIB libspdk_scheduler_gscheduler.a 00:29:14.111 LIB libspdk_scheduler_dpdk_governor.a 00:29:14.111 CC module/accel/ioat/accel_ioat_rpc.o 00:29:14.111 CC module/accel/iaa/accel_iaa_rpc.o 00:29:14.111 CC module/accel/dsa/accel_dsa_rpc.o 00:29:14.111 LIB libspdk_blob_bdev.a 00:29:14.111 LIB libspdk_scheduler_dynamic.a 00:29:14.370 LIB libspdk_accel_error.a 00:29:14.370 LIB libspdk_accel_iaa.a 00:29:14.370 LIB libspdk_accel_ioat.a 00:29:14.370 LIB libspdk_accel_dsa.a 00:29:14.370 CC module/bdev/gpt/gpt.o 00:29:14.370 CC module/bdev/delay/vbdev_delay.o 00:29:14.370 CC module/bdev/error/vbdev_error.o 00:29:14.370 CC module/bdev/lvol/vbdev_lvol.o 00:29:14.370 CC module/bdev/malloc/bdev_malloc.o 00:29:14.370 CC module/blobfs/bdev/blobfs_bdev.o 00:29:14.370 CC module/bdev/null/bdev_null.o 00:29:14.370 CC module/bdev/nvme/bdev_nvme.o 00:29:14.370 CC module/bdev/passthru/vbdev_passthru.o 00:29:14.370 LIB libspdk_sock_posix.a 00:29:14.370 CC module/bdev/nvme/bdev_nvme_rpc.o 00:29:14.370 CC module/bdev/gpt/vbdev_gpt.o 00:29:14.629 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:29:14.629 CC module/bdev/error/vbdev_error_rpc.o 00:29:14.629 CC module/bdev/null/bdev_null_rpc.o 00:29:14.629 CC module/bdev/delay/vbdev_delay_rpc.o 00:29:14.629 CC module/bdev/malloc/bdev_malloc_rpc.o 00:29:14.629 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:29:14.629 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:29:14.629 LIB libspdk_bdev_gpt.a 00:29:14.629 LIB libspdk_bdev_error.a 00:29:14.629 LIB libspdk_blobfs_bdev.a 00:29:14.629 CC module/bdev/nvme/nvme_rpc.o 00:29:14.629 CC module/bdev/nvme/bdev_mdns_client.o 00:29:14.629 LIB libspdk_bdev_null.a 00:29:14.629 LIB libspdk_bdev_malloc.a 00:29:14.629 LIB libspdk_bdev_delay.a 00:29:14.629 CC module/bdev/raid/bdev_raid.o 00:29:14.629 LIB libspdk_bdev_passthru.a 00:29:14.887 CC module/bdev/raid/bdev_raid_rpc.o 00:29:14.887 CC module/bdev/split/vbdev_split.o 00:29:14.887 CC module/bdev/aio/bdev_aio.o 00:29:14.887 CC module/bdev/zone_block/vbdev_zone_block.o 00:29:14.887 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:29:14.887 LIB libspdk_bdev_lvol.a 00:29:14.887 CC module/bdev/nvme/vbdev_opal.o 00:29:14.887 CC module/bdev/ftl/bdev_ftl.o 00:29:14.887 CC module/bdev/iscsi/bdev_iscsi.o 00:29:14.887 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:29:14.887 CC module/bdev/split/vbdev_split_rpc.o 00:29:14.887 CC module/bdev/nvme/vbdev_opal_rpc.o 00:29:14.887 CC module/bdev/aio/bdev_aio_rpc.o 00:29:14.887 LIB libspdk_bdev_zone_block.a 00:29:14.887 CC module/bdev/raid/bdev_raid_sb.o 00:29:15.146 CC module/bdev/ftl/bdev_ftl_rpc.o 00:29:15.146 CC module/bdev/raid/raid0.o 00:29:15.146 LIB libspdk_bdev_split.a 00:29:15.146 CC module/bdev/virtio/bdev_virtio_scsi.o 00:29:15.146 CC module/bdev/virtio/bdev_virtio_blk.o 00:29:15.146 CC module/bdev/raid/raid1.o 00:29:15.146 CC module/bdev/virtio/bdev_virtio_rpc.o 00:29:15.146 LIB libspdk_bdev_aio.a 00:29:15.146 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:29:15.146 LIB libspdk_bdev_iscsi.a 00:29:15.146 CC module/bdev/raid/concat.o 00:29:15.146 CC module/bdev/raid/raid5f.o 00:29:15.146 LIB libspdk_bdev_ftl.a 00:29:15.146 LIB libspdk_bdev_nvme.a 00:29:15.405 LIB libspdk_bdev_virtio.a 00:29:15.405 LIB libspdk_bdev_raid.a 00:29:15.663 CC module/event/subsystems/iobuf/iobuf.o 00:29:15.663 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:29:15.663 CC module/event/subsystems/sock/sock.o 00:29:15.663 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:29:15.663 CC module/event/subsystems/scheduler/scheduler.o 00:29:15.663 CC module/event/subsystems/vmd/vmd.o 00:29:15.663 CC module/event/subsystems/vmd/vmd_rpc.o 00:29:15.663 LIB libspdk_event_sock.a 00:29:15.663 LIB libspdk_event_vhost_blk.a 00:29:15.663 LIB libspdk_event_scheduler.a 00:29:15.663 LIB libspdk_event_vmd.a 00:29:15.663 LIB libspdk_event_iobuf.a 00:29:15.922 CC module/event/subsystems/accel/accel.o 00:29:15.922 LIB libspdk_event_accel.a 00:29:16.181 CC module/event/subsystems/bdev/bdev.o 00:29:16.181 LIB libspdk_event_bdev.a 00:29:16.440 CC module/event/subsystems/nbd/nbd.o 00:29:16.440 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:29:16.440 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:29:16.440 CC module/event/subsystems/scsi/scsi.o 00:29:16.440 LIB libspdk_event_nbd.a 00:29:16.440 LIB libspdk_event_nvmf.a 00:29:16.440 LIB libspdk_event_scsi.a 00:29:16.699 CC module/event/subsystems/iscsi/iscsi.o 00:29:16.699 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:29:16.699 LIB libspdk_event_vhost_scsi.a 00:29:16.958 LIB libspdk_event_iscsi.a 00:29:16.958 CXX app/trace/trace.o 00:29:16.958 TEST_HEADER include/spdk/config.h 00:29:16.958 CXX test/cpp_headers/accel.o 00:29:16.958 CC examples/accel/perf/accel_perf.o 00:29:16.958 CC test/bdev/bdevio/bdevio.o 00:29:16.958 CC test/accel/dif/dif.o 00:29:16.958 CC test/dma/test_dma/test_dma.o 00:29:16.958 CC 
examples/bdev/hello_world/hello_bdev.o 00:29:16.958 CC examples/blob/hello_world/hello_blob.o 00:29:16.958 CC test/app/bdev_svc/bdev_svc.o 00:29:16.958 CC test/blobfs/mkfs/mkfs.o 00:29:17.217 CXX test/cpp_headers/accel_module.o 00:29:17.217 LINK hello_bdev 00:29:17.217 LINK bdev_svc 00:29:17.217 LINK hello_blob 00:29:17.217 LINK mkfs 00:29:17.217 LINK accel_perf 00:29:17.475 CXX test/cpp_headers/assert.o 00:29:17.475 LINK test_dma 00:29:17.475 LINK spdk_trace 00:29:17.475 LINK dif 00:29:17.475 LINK bdevio 00:29:17.475 CXX test/cpp_headers/barrier.o 00:29:17.734 CXX test/cpp_headers/base64.o 00:29:17.993 CXX test/cpp_headers/bdev.o 00:29:18.560 CXX test/cpp_headers/bdev_module.o 00:29:18.819 CXX test/cpp_headers/bdev_zone.o 00:29:19.388 CXX test/cpp_headers/bit_array.o 00:29:19.647 CXX test/cpp_headers/bit_pool.o 00:29:19.906 CXX test/cpp_headers/blob.o 00:29:20.165 CXX test/cpp_headers/blob_bdev.o 00:29:21.101 CXX test/cpp_headers/blobfs.o 00:29:21.101 CXX test/cpp_headers/blobfs_bdev.o 00:29:22.479 CXX test/cpp_headers/conf.o 00:29:23.416 CC app/trace_record/trace_record.o 00:29:23.675 CXX test/cpp_headers/config.o 00:29:23.934 CXX test/cpp_headers/cpuset.o 00:29:25.312 LINK spdk_trace_record 00:29:25.312 CXX test/cpp_headers/crc16.o 00:29:26.690 CXX test/cpp_headers/crc32.o 00:29:28.077 CXX test/cpp_headers/crc64.o 00:29:29.455 CXX test/cpp_headers/dif.o 00:29:30.392 CXX test/cpp_headers/dma.o 00:29:31.771 CXX test/cpp_headers/endian.o 00:29:32.709 CXX test/cpp_headers/env.o 00:29:34.087 CXX test/cpp_headers/env_dpdk.o 00:29:34.346 CC examples/blob/cli/blobcli.o 00:29:35.283 CXX test/cpp_headers/event.o 00:29:35.283 CC app/nvmf_tgt/nvmf_main.o 00:29:36.220 CXX test/cpp_headers/fd.o 00:29:36.221 LINK nvmf_tgt 00:29:36.487 LINK blobcli 00:29:37.426 CXX test/cpp_headers/fd_group.o 00:29:38.363 CXX test/cpp_headers/file.o 00:29:39.741 CXX test/cpp_headers/ftl.o 00:29:41.118 CXX test/cpp_headers/gpt_spec.o 00:29:42.055 CXX test/cpp_headers/hexlify.o 00:29:42.992 CXX test/cpp_headers/histogram_data.o 00:29:44.371 CXX test/cpp_headers/idxd.o 00:29:45.308 CXX test/cpp_headers/idxd_spec.o 00:29:46.245 CXX test/cpp_headers/init.o 00:29:47.624 CXX test/cpp_headers/ioat.o 00:29:48.561 CXX test/cpp_headers/ioat_spec.o 00:29:49.939 CXX test/cpp_headers/iscsi_spec.o 00:29:50.878 CXX test/cpp_headers/json.o 00:29:52.257 CXX test/cpp_headers/jsonrpc.o 00:29:53.195 CXX test/cpp_headers/likely.o 00:29:54.574 CXX test/cpp_headers/log.o 00:29:56.481 CXX test/cpp_headers/lvol.o 00:29:57.923 CXX test/cpp_headers/memory.o 00:29:59.300 CXX test/cpp_headers/mmio.o 00:30:00.679 CXX test/cpp_headers/nbd.o 00:30:00.938 CXX test/cpp_headers/notify.o 00:30:02.844 CXX test/cpp_headers/nvme.o 00:30:04.750 CXX test/cpp_headers/nvme_intel.o 00:30:06.128 CXX test/cpp_headers/nvme_ocssd.o 00:30:08.032 CXX test/cpp_headers/nvme_ocssd_spec.o 00:30:09.409 CXX test/cpp_headers/nvme_spec.o 00:30:11.312 CXX test/cpp_headers/nvme_zns.o 00:30:12.720 CXX test/cpp_headers/nvmf.o 00:30:14.621 CXX test/cpp_headers/nvmf_cmd.o 00:30:16.526 CXX test/cpp_headers/nvmf_fc_spec.o 00:30:18.431 CXX test/cpp_headers/nvmf_spec.o 00:30:19.810 CXX test/cpp_headers/nvmf_transport.o 00:30:21.717 CXX test/cpp_headers/opal.o 00:30:23.622 CXX test/cpp_headers/opal_spec.o 00:30:25.001 CXX test/cpp_headers/pci_ids.o 00:30:26.380 CXX test/cpp_headers/pipe.o 00:30:27.786 CXX test/cpp_headers/queue.o 00:30:27.786 CXX test/cpp_headers/reduce.o 00:30:29.689 CXX test/cpp_headers/rpc.o 00:30:31.068 CXX test/cpp_headers/scheduler.o 00:30:32.975 CXX 
test/cpp_headers/scsi.o 00:30:34.883 CXX test/cpp_headers/scsi_spec.o 00:30:36.261 CXX test/cpp_headers/sock.o 00:30:37.652 CXX test/cpp_headers/stdinc.o 00:30:39.031 CXX test/cpp_headers/string.o 00:30:40.410 CXX test/cpp_headers/thread.o 00:30:42.317 CXX test/cpp_headers/trace.o 00:30:43.696 CXX test/cpp_headers/trace_parser.o 00:30:45.074 CXX test/cpp_headers/tree.o 00:30:45.643 CXX test/cpp_headers/ublk.o 00:30:47.023 CXX test/cpp_headers/util.o 00:30:48.929 CXX test/cpp_headers/uuid.o 00:30:50.307 CXX test/cpp_headers/version.o 00:30:50.307 CXX test/cpp_headers/vfio_user_pci.o 00:30:52.214 CXX test/cpp_headers/vfio_user_spec.o 00:30:53.596 CXX test/cpp_headers/vhost.o 00:30:54.533 CXX test/cpp_headers/vmd.o 00:30:55.911 CXX test/cpp_headers/xor.o 00:30:57.297 CXX test/cpp_headers/zipf.o 00:31:00.589 CC test/env/mem_callbacks/mem_callbacks.o 00:31:01.527 LINK mem_callbacks 00:31:08.123 CC test/env/vtophys/vtophys.o 00:31:09.090 LINK vtophys 00:31:15.657 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:31:15.917 CC test/env/memory/memory_ut.o 00:31:16.175 CC test/env/pci/pci_ut.o 00:31:16.175 LINK env_dpdk_post_init 00:31:16.175 CC app/iscsi_tgt/iscsi_tgt.o 00:31:16.434 LINK memory_ut 00:31:16.692 LINK pci_ut 00:31:16.692 LINK iscsi_tgt 00:31:16.951 CC app/spdk_tgt/spdk_tgt.o 00:31:17.210 CC app/spdk_lspci/spdk_lspci.o 00:31:17.468 LINK spdk_lspci 00:31:17.468 LINK spdk_tgt 00:31:17.726 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:31:18.294 CC examples/bdev/bdevperf/bdevperf.o 00:31:18.862 LINK nvme_fuzz 00:31:19.799 LINK bdevperf 00:31:20.366 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:31:21.744 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:31:22.003 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:31:23.380 LINK vhost_fuzz 00:31:23.639 LINK iscsi_fuzz 00:31:33.621 CC test/app/histogram_perf/histogram_perf.o 00:31:33.621 LINK histogram_perf 00:31:38.892 CC test/app/jsoncat/jsoncat.o 00:31:38.892 LINK jsoncat 00:31:42.178 CC app/spdk_nvme_perf/perf.o 00:31:44.082 CC app/spdk_nvme_identify/identify.o 00:31:45.019 LINK spdk_nvme_perf 00:31:46.392 LINK spdk_nvme_identify 00:31:48.945 CC app/spdk_nvme_discover/discovery_aer.o 00:31:49.548 LINK spdk_nvme_discover 00:31:49.548 CC test/app/stub/stub.o 00:31:50.484 LINK stub 00:31:51.856 CC test/event/event_perf/event_perf.o 00:31:52.130 LINK event_perf 00:31:52.412 CC test/lvol/esnap/esnap.o 00:31:57.697 CC test/nvme/aer/aer.o 00:31:59.601 LINK aer 00:32:09.580 LINK esnap 00:32:12.870 CC test/event/reactor/reactor.o 00:32:13.438 LINK reactor 00:32:25.642 CC test/event/reactor_perf/reactor_perf.o 00:32:25.642 CC test/event/app_repeat/app_repeat.o 00:32:25.642 LINK reactor_perf 00:32:25.642 LINK app_repeat 00:32:30.911 CC test/event/scheduler/scheduler.o 00:32:31.170 LINK scheduler 00:32:31.737 CC examples/ioat/perf/perf.o 00:32:32.673 CC examples/ioat/verify/verify.o 00:32:32.673 LINK ioat_perf 00:32:33.610 LINK verify 00:32:34.546 CC test/nvme/reset/reset.o 00:32:35.483 LINK reset 00:32:38.771 CC test/nvme/sgl/sgl.o 00:32:39.708 CC test/nvme/e2edp/nvme_dp.o 00:32:39.708 LINK sgl 00:32:40.644 LINK nvme_dp 00:32:44.834 CC test/nvme/overhead/overhead.o 00:32:44.834 CC examples/nvme/hello_world/hello_world.o 00:32:45.771 LINK overhead 00:32:46.035 LINK hello_world 00:33:04.188 CC test/nvme/err_injection/err_injection.o 00:33:04.188 LINK err_injection 00:33:10.758 CC test/nvme/startup/startup.o 00:33:11.326 CC test/nvme/reserve/reserve.o 00:33:11.586 LINK startup 00:33:12.154 LINK reserve 00:33:13.090 CC test/nvme/simple_copy/simple_copy.o 
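The long CXX test/cpp_headers/*.o run above compiles one object per public SPDK header, which reads like a self-containedness check: each translation unit presumably includes a single header and must build as standalone C++ (the TEST_HEADER include/spdk/config.h entry near the start of the phase points the same way). A sketch of that shape — the loop, file names, and compiler flags are guesses for illustration, not taken from this log:

    # compile every public header alone to prove it pulls in its own dependencies
    for hdr in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$hdr")" > check.cpp
        g++ -std=c++11 -Iinclude -c check.cpp -o /dev/null
    done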
00:33:13.090 CC test/nvme/connect_stress/connect_stress.o 00:33:13.658 LINK connect_stress 00:33:13.658 LINK simple_copy 00:33:16.948 CC test/nvme/boot_partition/boot_partition.o 00:33:17.207 CC test/nvme/compliance/nvme_compliance.o 00:33:17.775 LINK boot_partition 00:33:19.153 LINK nvme_compliance 00:33:21.058 CC examples/nvme/reconnect/reconnect.o 00:33:22.962 LINK reconnect 00:33:37.849 CC test/nvme/fused_ordering/fused_ordering.o 00:33:37.849 LINK fused_ordering 00:33:42.042 CC test/rpc_client/rpc_client_test.o 00:33:42.301 LINK rpc_client_test 00:33:42.870 CC test/nvme/doorbell_aers/doorbell_aers.o 00:33:43.129 CC test/nvme/fdp/fdp.o 00:33:43.697 LINK doorbell_aers 00:33:43.956 CC test/thread/poller_perf/poller_perf.o 00:33:44.215 LINK fdp 00:33:44.475 LINK poller_perf 00:33:45.853 CC app/spdk_top/spdk_top.o 00:33:48.401 LINK spdk_top 00:33:50.357 CC test/thread/lock/spdk_lock.o 00:33:51.294 CC test/nvme/cuse/cuse.o 00:33:53.199 LINK spdk_lock 00:33:54.137 LINK cuse 00:33:54.396 CC examples/nvme/nvme_manage/nvme_manage.o 00:33:55.774 LINK nvme_manage 00:33:55.774 CC examples/nvme/arbitration/arbitration.o 00:33:57.152 LINK arbitration 00:33:58.089 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:33:58.348 LINK histogram_ut 00:33:59.726 CC test/unit/lib/accel/accel.c/accel_ut.o 00:33:59.985 CC app/vhost/vhost.o 00:34:00.553 LINK vhost 00:34:00.812 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:34:01.379 CC test/unit/lib/bdev/part.c/part_ut.o 00:34:03.281 LINK accel_ut 00:34:04.658 CC app/spdk_dd/spdk_dd.o 00:34:04.659 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:34:04.918 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:34:05.177 LINK spdk_dd 00:34:05.177 LINK scsi_nvme_ut 00:34:05.745 LINK part_ut 00:34:05.745 LINK gpt_ut 00:34:07.124 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:34:08.059 LINK bdev_ut 00:34:09.437 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:34:09.437 LINK vbdev_lvol_ut 00:34:10.814 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:34:11.749 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:34:12.743 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:34:12.743 LINK bdev_raid_sb_ut 00:34:13.001 LINK bdev_zone_ut 00:34:13.258 LINK bdev_raid_ut 00:34:15.158 CC examples/nvme/hotplug/hotplug.o 00:34:15.158 LINK bdev_ut 00:34:15.158 CC app/fio/nvme/fio_plugin.o 00:34:15.417 CC app/fio/bdev/fio_plugin.o 00:34:15.984 LINK hotplug 00:34:16.242 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:34:16.809 LINK spdk_bdev 00:34:16.809 LINK spdk_nvme 00:34:16.809 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:34:17.745 LINK concat_ut 00:34:18.683 LINK vbdev_zone_block_ut 00:34:25.281 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:34:25.848 LINK raid1_ut 00:34:26.107 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:34:26.675 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:34:29.218 LINK raid5f_ut 00:34:29.218 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:34:31.124 LINK blob_bdev_ut 00:34:36.397 LINK bdev_nvme_ut 00:34:36.397 CC test/unit/lib/blob/blob.c/blob_ut.o 00:34:39.686 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:34:39.686 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:34:39.945 LINK tree_ut 00:34:40.204 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:34:41.582 CC examples/nvme/cmb_copy/cmb_copy.o 00:34:42.519 LINK blobfs_async_ut 00:34:42.519 LINK cmb_copy 00:34:43.087 LINK blobfs_sync_ut 00:34:43.655 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:34:44.593 LINK 
blobfs_bdev_ut 00:34:49.869 CC test/unit/lib/dma/dma.c/dma_ut.o 00:34:50.437 LINK blob_ut 00:34:51.005 LINK dma_ut 00:34:52.383 CC test/unit/lib/event/app.c/app_ut.o 00:34:52.951 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:34:54.856 LINK app_ut 00:34:55.793 LINK reactor_ut 00:34:56.730 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:34:58.117 LINK ioat_ut 00:34:59.055 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:34:59.622 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:35:01.002 LINK init_grp_ut 00:35:01.571 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:35:02.139 LINK conn_ut 00:35:02.717 CC test/unit/lib/iscsi/param.c/param_ut.o 00:35:04.193 LINK param_ut 00:35:05.131 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:35:05.390 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:35:06.327 LINK portal_grp_ut 00:35:06.584 LINK iscsi_ut 00:35:07.149 LINK tgt_node_ut 00:35:07.716 CC examples/nvme/abort/abort.o 00:35:08.653 LINK abort 00:35:09.221 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:35:11.128 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:35:11.696 LINK jsonrpc_server_ut 00:35:11.955 CC test/unit/lib/log/log.c/log_ut.o 00:35:12.524 LINK json_parse_ut 00:35:12.784 LINK log_ut 00:35:13.721 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:35:15.626 CC test/unit/lib/notify/notify.c/notify_ut.o 00:35:17.005 LINK notify_ut 00:35:17.005 LINK lvol_ut 00:35:17.005 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:35:17.573 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:35:17.832 LINK pmr_persistence 00:35:19.210 LINK json_util_ut 00:35:20.148 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:35:21.527 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:35:23.432 LINK nvme_ut 00:35:24.811 CC examples/sock/hello_world/hello_sock.o 00:35:25.379 CC examples/vmd/lsvmd/lsvmd.o 00:35:25.638 LINK hello_sock 00:35:25.638 CC examples/vmd/led/led.o 00:35:25.898 LINK lsvmd 00:35:26.158 LINK nvme_ctrlr_ut 00:35:26.158 LINK led 00:35:26.726 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:35:28.630 LINK json_write_ut 00:35:28.630 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:35:30.007 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:35:32.543 LINK nvme_ctrlr_cmd_ut 00:35:35.079 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:35:35.079 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:35:35.656 LINK tcp_ut 00:35:36.639 LINK nvme_ns_ut 00:35:36.639 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:35:36.898 LINK nvme_ctrlr_ocssd_cmd_ut 00:35:37.157 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:35:38.535 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:35:38.794 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:35:39.362 LINK nvme_ns_cmd_ut 00:35:39.362 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:35:39.362 LINK nvme_ns_ocssd_cmd_ut 00:35:39.930 LINK nvme_pcie_ut 00:35:39.930 LINK nvme_poll_group_ut 00:35:39.930 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:35:39.930 CC examples/nvmf/nvmf/nvmf.o 00:35:40.189 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:35:40.447 LINK nvmf 00:35:40.706 LINK nvme_quirks_ut 00:35:40.965 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:35:40.965 LINK nvme_qpair_ut 00:35:40.965 LINK ctrlr_ut 00:35:41.225 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:35:41.225 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:35:41.225 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:35:41.793 CC 
test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:35:42.052 LINK nvme_io_msg_ut 00:35:42.311 LINK nvme_transport_ut 00:35:42.311 LINK nvme_pcie_common_ut 00:35:42.879 LINK nvme_tcp_ut 00:35:42.879 LINK nvme_fabric_ut 00:35:43.138 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:35:43.706 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:35:43.706 LINK nvme_opal_ut 00:35:43.706 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:35:44.274 CC examples/util/zipf/zipf.o 00:35:44.531 LINK zipf 00:35:44.789 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:35:45.048 CC examples/thread/thread/thread_ex.o 00:35:45.048 LINK nvme_cuse_ut 00:35:45.307 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:35:45.566 LINK nvme_rdma_ut 00:35:45.566 LINK thread 00:35:45.825 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:35:46.392 LINK ctrlr_bdev_ut 00:35:46.651 LINK subsystem_ut 00:35:46.910 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:35:46.910 LINK ctrlr_discovery_ut 00:35:47.169 CC examples/idxd/perf/perf.o 00:35:47.736 LINK idxd_perf 00:35:48.303 LINK nvmf_ut 00:35:49.680 CC examples/interrupt_tgt/interrupt_tgt.o 00:35:49.939 LINK interrupt_tgt 00:35:50.876 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:35:51.443 LINK dev_ut 00:35:51.702 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:35:51.702 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:35:52.268 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:35:52.836 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:35:52.836 LINK lun_ut 00:35:53.095 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:35:53.354 LINK scsi_ut 00:35:55.288 LINK scsi_bdev_ut 00:35:55.288 LINK rdma_ut 00:35:55.856 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:35:56.115 LINK transport_ut 00:35:57.053 LINK scsi_pr_ut 00:35:58.432 CC test/unit/lib/sock/sock.c/sock_ut.o 00:36:00.969 CC test/unit/lib/sock/posix.c/posix_ut.o 00:36:01.906 LINK sock_ut 00:36:02.844 LINK posix_ut 00:36:04.750 CC test/unit/lib/thread/thread.c/thread_ut.o 00:36:05.688 CC test/unit/lib/util/base64.c/base64_ut.o 00:36:06.282 LINK base64_ut 00:36:08.248 LINK thread_ut 00:36:08.816 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:36:09.384 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:36:09.951 LINK bit_array_ut 00:36:09.951 LINK cpuset_ut 00:36:10.519 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:36:11.456 LINK pci_event_ut 00:36:12.390 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:36:12.958 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:36:13.216 LINK iobuf_ut 00:36:13.217 LINK crc16_ut 00:36:13.476 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:36:13.735 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:36:13.993 LINK crc32_ieee_ut 00:36:14.251 LINK crc32c_ut 00:36:14.251 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:36:15.185 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:36:15.185 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:36:15.185 LINK subsystem_ut 00:36:15.751 LINK crc64_ut 00:36:15.751 CC test/unit/lib/util/dif.c/dif_ut.o 00:36:16.010 LINK rpc_ut 00:36:16.269 CC test/unit/lib/util/iov.c/iov_ut.o 00:36:16.837 LINK iov_ut 00:36:17.405 CC test/unit/lib/util/math.c/math_ut.o 00:36:17.973 LINK math_ut 00:36:17.973 LINK dif_ut 00:36:19.879 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:36:19.879 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:36:20.447 LINK pipe_ut 00:36:20.447 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:36:20.447 LINK idxd_user_ut 00:36:21.017 CC test/unit/lib/util/string.c/string_ut.o 00:36:21.017 CC test/unit/lib/util/xor.c/xor_ut.o 00:36:21.586 LINK string_ut 00:36:21.586 LINK 
xor_ut 00:36:24.120 LINK vhost_ut 00:36:24.120 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:36:24.120 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:36:24.120 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:36:24.120 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:36:24.120 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:36:24.120 CC test/unit/lib/rdma/common.c/common_ut.o 00:36:24.380 LINK ftl_bitmap_ut 00:36:24.639 LINK ftl_l2p_ut 00:36:25.206 LINK common_ut 00:36:25.206 LINK ftl_io_ut 00:36:25.206 LINK idxd_ut 00:36:25.774 LINK ftl_band_ut 00:36:26.711 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:36:26.711 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:36:27.279 LINK ftl_mempool_ut 00:36:27.847 LINK ftl_mngt_ut 00:36:28.785 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:36:28.785 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:36:30.163 LINK ftl_layout_upgrade_ut 00:36:30.163 LINK ftl_sb_ut 00:37:16.852 json_parse_ut.c: In function ‘test_parse_nesting’: 00:37:16.852 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without 00:37:16.852 616 | test_parse_nesting(void) 00:37:16.852 | ^ 00:37:16.852 14:39:08 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:37:16.852 make[1]: Nothing to be done for 'clean'. 00:37:21.080 14:39:12 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:37:21.080 14:39:12 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:37:21.080 14:39:12 -- common/autotest_common.sh@10 -- $ set +x 00:37:21.080 14:39:12 -- spdk/autopackage.sh@48 -- $ timing_finish 00:37:21.080 14:39:12 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:21.080 14:39:12 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:21.080 14:39:12 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:21.080 + [[ -n 2253 ]] 00:37:21.080 + sudo kill 2253 00:37:21.089 [Pipeline] } 00:37:21.105 [Pipeline] // timeout 00:37:21.110 [Pipeline] } 00:37:21.124 [Pipeline] // stage 00:37:21.130 [Pipeline] } 00:37:21.144 [Pipeline] // catchError 00:37:21.154 [Pipeline] stage 00:37:21.156 [Pipeline] { (Stop VM) 00:37:21.169 [Pipeline] sh 00:37:21.448 + vagrant halt 00:37:23.982 ==> default: Halting domain... 00:37:33.974 [Pipeline] sh 00:37:34.258 + vagrant destroy -f 00:37:37.545 ==> default: Removing domain... 00:37:38.124 [Pipeline] sh 00:37:38.404 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:37:38.413 [Pipeline] } 00:37:38.428 [Pipeline] // stage 00:37:38.433 [Pipeline] } 00:37:38.447 [Pipeline] // dir 00:37:38.453 [Pipeline] } 00:37:38.467 [Pipeline] // wrap 00:37:38.473 [Pipeline] } 00:37:38.487 [Pipeline] // catchError 00:37:38.496 [Pipeline] stage 00:37:38.498 [Pipeline] { (Epilogue) 00:37:38.511 [Pipeline] sh 00:37:38.792 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:53.689 [Pipeline] catchError 00:37:53.691 [Pipeline] { 00:37:53.703 [Pipeline] sh 00:37:53.984 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:53.984 Artifacts sizes are good 00:37:53.992 [Pipeline] } 00:37:54.006 [Pipeline] // catchError 00:37:54.017 [Pipeline] archiveArtifacts 00:37:54.023 Archiving artifacts 00:37:54.276 [Pipeline] cleanWs 00:37:54.291 [WS-CLEANUP] Deleting project workspace... 00:37:54.291 [WS-CLEANUP] Deferred wipeout is used... 
00:37:54.319 [WS-CLEANUP] done 00:37:54.321 [Pipeline] } 00:37:54.336 [Pipeline] // stage 00:37:54.342 [Pipeline] } 00:37:54.356 [Pipeline] // node 00:37:54.362 [Pipeline] End of Pipeline 00:37:54.396 Finished: SUCCESS